One of the cool things about Iguana is that it is actually a fully fledged development environment. Because of this you can do so much more with it than you can with a traditional interface engine. For example it’s easy to build mini applications and utilities on top of Iguana.
We noticed that many of our customers were implementing very similar utilities to manage what they do with Iguana. This inspired us to build and share the Iguana apps that are available in this repository.
Simply add this repository from GitHub (https://github.com/interfaceware/iguana-apps) and import the channels that you are interested in. Then review the comments in the code, and read the corresponding sections (below) in this article. These apps were originally created for Iguana 5, so you may also find the Iguana Apps (Iguana 5 documentation) section interesting.
Bed Monitor [top]
The application consists of three channels:
- Bed Monitor – 1.Fake ADT Feed: Generates a dummy ADT feed.
- Bed Monitor – 2.ADT In: Processes this feed and populates a SQLite database with the bed status information.
- Bed Monitor – 3.Web Dashboard: Presents this information in the web dashboard.
This is a simple dashboard application for viewing bed statuses in an (imaginary) ER department. It’s a good starting point for creating custom dashboards; just adapt the code for your own needs.
To see the dashboard in action just run all three channels, and connect to the bedmonitor URL: http://<iguana address>:<port>/bedmonitor/ (default = http://localhost:6544/bedmonitor/).
The dashboard will take a few minutes to fully populate, as the (imaginary) patients continue to arrive.
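At its heart, the ADT-to-dashboard flow is just a small state table keyed by bed. Here is a minimal sketch of that idea in Python (the real app is Lua running inside Iguana; the table schema, event codes handled, and all names below are assumptions for illustration, not the app’s actual code):

```python
import sqlite3

# Illustrative sketch only: the real app is Lua inside Iguana, and this
# schema is an assumption, not the app's actual one.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS beds (
        bed_id  TEXT PRIMARY KEY,
        patient TEXT,
        status  TEXT   -- e.g. 'occupied' or 'free'
    )""")

def apply_adt_event(event_type, bed_id, patient):
    # An A01 (admit) marks a bed occupied; an A03 (discharge) frees it.
    if event_type == "A01":
        conn.execute("INSERT OR REPLACE INTO beds VALUES (?, ?, 'occupied')",
                     (bed_id, patient))
    elif event_type == "A03":
        conn.execute("INSERT OR REPLACE INTO beds VALUES (?, NULL, 'free')",
                     (bed_id,))
    conn.commit()

apply_adt_event("A01", "ER-1", "DOE, JOHN")
apply_adt_event("A01", "ER-2", "ROE, JANE")
apply_adt_event("A03", "ER-1", None)

# The dashboard channel would then simply query this table:
rows = conn.execute("SELECT bed_id, status FROM beds ORDER BY bed_id").fetchall()
```

The dashboard channel’s job reduces to rendering the result of that last query as HTML.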
Tip: You can hover over the HTTPS component to find your bedmonitor url:
Channel Manager [top]
The application consists of one channel:
- Channel Manager: This is the complete channel manager app.
The Channel Manager application has been superseded by Iguana 6’s built in ability to import and export channels.
However the (Git) repository structure we are using in Iguana 6 is different from the one we used for the Iguana 5 Channel Manager. This means Iguana 6 cannot import channels from repositories created by the Iguana 5 Channel Manager.
To get around this, we updated the Channel Manager to allow you to import from version 5 repositories. However, there is a catch: the save milestone API changed in Iguana 6 so that it requires a list of files. This version of the Channel Manager works around the change by not saving a milestone. In some ways this is better, since you can then check what has been imported and make sure it doesn’t overwrite any useful modules you have.
The version 6 Channel Manager is really only intended to help you migrate existing Iguana 5 channels.
Monitor Iguana [top]
The application consists of two channels:
- Monitor – 1.Agent: This is the channel that posts status home.
- Monitor – 2.Dashboard: Presents this information in the web dashboard.
A common problem for any vendor of significant size is how to monitor a large number of Iguana instances that are located behind firewalls at customer sites. We created this monitoring app to solve exactly this problem.
How it works is simple:
- On each monitored instance of Iguana you install an Agent Monitor channel as a From Translator channel.
- Every few minutes this channel wakes up and queries the status of the local instance of Iguana using the monitor API.
- It then uses HTTP to post that data back to a central instance of Iguana.
- The central instance of Iguana listens for incoming HTTP requests from the agents and logs that data into a simple SQLite database.
- The same central instance also serves up an HTML dashboard which shows the status of the Iguana instances.
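The agent-to-central flow above can be sketched in a few lines. This is Python for illustration only; the real app is Lua using Iguana’s monitor API and an HTTP POST, and the payload shape and names here are invented:

```python
import json
import sqlite3
import time

def collect_status(instance_name):
    # On a real agent this data would come from the local Iguana monitor
    # API; the fields here are assumptions for illustration.
    return {"instance": instance_name,
            "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
            "channels_running": 12,
            "errors": 0}

def central_log(db, payload_json):
    # The central instance parses the posted JSON and logs it to SQLite.
    status = json.loads(payload_json)
    db.execute("INSERT INTO status VALUES (?, ?, ?, ?)",
               (status["instance"], status["timestamp"],
                status["channels_running"], status["errors"]))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (instance TEXT, ts TEXT, running INT, errors INT)")

# In the real app this payload travels over an HTTP POST every few minutes;
# here we hand it straight to the central logger.
payload = json.dumps(collect_status("site-a"))
central_log(db, payload)

count = db.execute("SELECT COUNT(*) FROM status").fetchone()[0]
```

The dashboard then just reads the most recent row per instance out of that table.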
Here’s a screenshot of the dashboard:
Only one row is showing in this case. If you click on that row you’ll see all the low-level health statistics of that particular Iguana instance.
The exciting thing is not the pretty dashboard, but what you can do with it. Think about the possibilities:
- The agents are all simple and dumb.
- All the data is easy to get to in a central location, and thanks to the Iguana Translator, dead easy to manipulate.
- That makes it simple to run automated checks on the data, so you can power automated alerts that point to this dashboard instead of bombarding you with incomprehensible email notification spam.
- You can customize what is displayed in the dashboard to your heart’s delight.
Regression Test [top]
The application consists of one channel:
- Regressions: The complete regression testing app.
When you build interfaces in the Iguana Translator, you’re effectively testing as you go. The Translator runs your sample data through your scripts in real time, so you know instantly if your code works, and you also know instantly if you have an error. If you’ve seen your interface perform correctly against all your sample data, you know you can put it into production with confidence.
But what about when you make a change to an interface, or a change to your environment? You want to be sure your change hasn’t broken your interface, and you especially want to be sure your interface is producing the same results. This is what the regression testing app is for!
The app tests the message filter in an Iguana channel. The first time you run the app, it runs your full set of sample messages through your filter, and saves the results on disk. Any time you run it after that, it does the same thing, and compares the current results with the saved ones. If there’s any difference between an expected result and an actual one, the app reports a test failure.
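The save-then-compare idea can be sketched like this (Python for illustration; the real app is Lua, and the file layout, function, and names here are assumptions, not what the app actually writes to its Worktank folder):

```python
import json
import os
import tempfile

def run_regression(worktank, channel, actual_results):
    """Save a baseline on the first run; compare against it afterwards."""
    path = os.path.join(worktank, channel + ".json")
    if not os.path.exists(path):
        # First run: record the actual results as the expected baseline.
        with open(path, "w") as f:
            json.dump(actual_results, f)
        return {"baseline_created": True, "failures": []}
    with open(path) as f:
        expected = json.load(f)
    # Any message whose current output differs from the saved output fails.
    failures = [msg_id for msg_id in expected
                if actual_results.get(msg_id) != expected[msg_id]]
    return {"baseline_created": False, "failures": failures}

worktank = tempfile.mkdtemp()
first  = run_regression(worktank, "ADT_In", {"msg1": "PASS-A", "msg2": "PASS-B"})
second = run_regression(worktank, "ADT_In", {"msg1": "PASS-A", "msg2": "CHANGED"})
```

The second run reports msg2 as a failure because its output no longer matches the baseline.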
How to get started:
Inside the new project, edit regressions.config (see the screenshot below) to set the value of config.Worktank. This is the folder where the app will save the expected test results, and it must exist before you run the app. On a Windows computer, the line might look something like this:
- config.Worktank = "C:\\Good_Folder\\Iguana_Expected" (note the doubled backslashes, which Lua string literals require for Windows paths)
Save a milestone and start the channel. After this, you should be able to visit the main page of the app itself. You will see a list of channels that have both message filters and sample data.
Click one of the channels, and the app will tell you that you need to generate a set of expected results before it can run tests. Click that link. After a pause, you should see your first set of results. Every test will show a pass, because the expected results and the actual ones were generated at the same time.
If there are any test failures, they’ll show at the top of the list.
If you click the “Inspect” link for an individual test, you’ll see the expected and actual results on the same screen. If the test passed, they’ll be the same. If it failed, you’ll see the differences between the actual and expected highlighted in red and green.
If a test fails, but that’s because your interface is really supposed to be generating something different, you can edit the expected results right on this screen. Click in the text of the expected result, edit it, and as soon as you click outside the text, the changed result will be updated on disk.
Try editing some expected results to change passes into failures and vice versa. And once you’ve got your tests running, make sure to re-run them frequently.
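The expected-versus-actual comparison behind the “Inspect” view is essentially a line diff. A minimal sketch of that idea, using Python’s standard difflib rather than the app’s own diff and highlighting code (the sample HL7 lines are invented):

```python
import difflib

# Two versions of the same (invented) HL7 output: the saved expected result
# and the current actual result, differing in one PID field.
expected = "MSH|^~\\&|SEND\nPID|1||4525285\n".splitlines()
actual   = "MSH|^~\\&|SEND\nPID|1||9999999\n".splitlines()

# unified_diff marks removed lines with '-' and added lines with '+'.
diff = list(difflib.unified_diff(expected, actual, lineterm=""))

# Keep only the changed lines, skipping the '---'/'+++' file headers;
# these are the lines a UI would highlight in red and green.
changed = [line for line in diff
           if line.startswith(("+", "-"))
           and not line.startswith(("+++", "---"))]
```

Here `changed` holds the one removed and one added PID line, which is exactly the red/green pair the Inspect screen shows.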
Unit Test [top]
The application consists of two channels:
- Unit Test – 1. GUI: The web app that presents the test results.
- Unit Test – 2. Tests: The unit tests themselves.
One thing that became pretty darned obvious to us early on with these apps is that a solid regression testing system is a necessity.
One area I felt the pain was in some of the work I did with refactoring the first version of the Channel Manager. I’d fix a bug on Windows and introduce a bug on Mac OS X, then go edit stuff on a Mac and break it on Windows, and so on. Putting in a file abstraction layer (Iguana 5 documentation) to smooth over some of the differences between Windows and the rest of the world helped, but one truth remains: for non-trivial development, you need to have regression testing in place. Without it, it’s tough to know what you’re breaking as you develop new features.
We have a lot of regression testing in place in the core of Iguana, but we don’t have a really nice blueprint for how to do unit testing with Lua in the Iguana translator. So when our QA engineer, Wade, announced he’d run out of work last week, I taught him how to write an Iguana web app and pointed him at the spinner library Bret wrote for our first regression testing application. In less than a day Wade had a working prototype of what has the potential to be an amazing regression testing system.
The spinner library (Iguana 5 documentation) basically makes it a doddle to spin up a translator instance on a remote Iguana instance, populate it with Lua code, and breathe the beastie into life (Prometheus unbound, anyone?). Wade’s prototype does the following:
- Loads the unit test off disk.
- Installs it on to a mini farm of Iguana servers running on Windows, Linux and Mac.
- Calls the unit test on each server.
- Brings back the results and reports them. Ta da!
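The load/install/run/report loop above can be sketched in Python (illustration only; the real app uses the spinner library to execute Lua on remote Iguana translators, and the server names and executor function here are invented):

```python
def run_suite(servers, unit_test_code, execute):
    """Run the same unit test code on every server and collect the results.

    'execute' stands in for spinning up a remote translator and running
    the Lua code on it; here it is just a callable we are handed.
    """
    results = {}
    for server in servers:
        try:
            results[server] = execute(server, unit_test_code)
        except Exception as err:
            # A failure on one platform shouldn't stop the rest of the farm.
            results[server] = "ERROR: %s" % err
    return results

# A fake executor so the sketch is self-contained: pretend every platform
# passes except the one with a platform-specific bug.
def fake_execute(server, code):
    if server == "mac-mini":
        raise RuntimeError("path separator mismatch")
    return "PASS"

report = run_suite(["windows-box", "linux-box", "mac-mini"],
                   "-- the Lua unit test source would go here",
                   fake_execute)
```

Collecting per-server results this way is what makes the cross-platform breakage described above visible in one report.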
Obviously it’s a starting point, since we want to make this run automatically as commits come into the repository. But we should have no trouble doing all sorts of neat things with it. Because it’s so easy to grab data using HTTP, it should be possible to point it at an arbitrary repo with Iguana channels in it and have it run tests.
The application is in the Iguana app repository now. Wade has made it configurable so you can add as many Iguana hosts to test against as you like:
Wade has also styled the output to go into a data table:
We may change this over to use a tree control later. Wade is working on adding support for triggering the unit test suite when Git commits are made.
Web Service [top]
The application consists of two channels:
- Webservice – 1. Service: The example web service.
- Webservice – 2. Client: A client that connects to the service.
You can think of this project as ‘SOAP lite’. It shows how one can build out a web service API that follows RESTful practices – but has a way to catalogue what calls are available. Like SOAP, it’s possible to connect to the web service and create a stub on the fly which can be used to call the exposed web service methods.
Unlike SOAP it’s not a heavy implementation. These are the key points:
- Type information is not specified in the same overwhelming detail as it is for SOAP. Over-specifying type information is what makes SOAP brittle.
- The methods can be called using any HTTP client, without requiring a specialized client to do it. Unlike SOAP, it’s not hard to call these methods – there aren’t lots of bizarre little rules to trip you up.
- With SOAP it’s necessary for the client to pull the whole WSDL (Web Service Definition Language) file at once and parse it in its entirety, which makes SOAP slow and cumbersome. The design of this approach means we can optimize it to only pull down some of the information.
- Documentation of the web API is foremost in the design. The emphasis is on making it easy for a human being to understand rather than a computer – since (for now at least) human beings write code.
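The ‘stub on the fly’ idea can be sketched like this (Python for illustration; the real client is Lua, and the catalogue format, method names, and transport here are all invented assumptions, not the app’s actual wire format):

```python
import json

# A hard-coded method catalogue; the real client would fetch something
# like this from the web service itself. Everything in it is invented.
CATALOGUE = json.loads("""
{
  "methods": {
    "findPatient": {"help": "Look up a patient by MRN.", "params": ["mrn"]},
    "listBeds":    {"help": "List bed statuses.",        "params": []}
  }
}
""")

def make_stub(name, spec, transport):
    # Build a callable stub that carries its own help text.
    def stub(**kwargs):
        # A real stub would issue an HTTP request; 'transport' stands in.
        return transport(name, kwargs)
    stub.__name__ = name
    stub.__doc__ = spec["help"]
    return stub

def build_client(catalogue, transport):
    # One stub per catalogued method: this is the auto-discovery step.
    return {name: make_stub(name, spec, transport)
            for name, spec in catalogue["methods"].items()}

# Fake transport so the sketch runs without a server.
client = build_client(CATALOGUE, lambda name, args: {"called": name, "args": args})
result = client["findPatient"](mrn="4525285")
```

Because each stub carries the catalogued help text, a caller can discover the available methods and read their documentation without ever parsing a WSDL-style contract.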
We’re actively looking for one of our OEM partners to work with on putting this into the field in a real application – there are some optimizations and tweaks that we could make to really turn this into a showcase of how integration can be done.
If you think about it, this is the ultimate:
- A clear public API for an application – so upgrading is easy.
- Only small amounts of custom external Lua code needed for each custom interface – easy to maintain.
- Simple to learn – easy to scale teams up and down to interface a specific product.
- Complete random access to the data model of your application – perfect control for the integration engineer.
The approach can be layered on top of any pre-existing application – SOAP and .NET, Microsoft Azure, Java JSP pages, Ruby on Rails – since you can craft this web service as an intermediary to those backend layers.
The web service now allows you to edit the help. The cool thing about this is that it suddenly becomes very easy to write help for any Iguana Translator module. The editor hasn’t been styled yet, but it’s quite functional; here’s the view mode:
And here’s the edit mode:
If nothing else, it’s a great tool for writing help for your own modules. Currently it stores the help in JSON files as part of the project.
We’ve had a go at wrapping the RESTful API of a simple CRM application called Highrise, from Basecamp. The web service supports methods that can be accessed like a normal RESTful web service, like so:
But where it really gets fun is when you connect to the service using a simple Lua client Kevin wrote, which creates Lua functions on the fly that have help defined. These allow an interface programmer to auto-discover the methods of the web service, call them, and see their help within the Translator. Here’s an example screenshot:
This shows best practice in building out a really intuitive, easy-to-use API for integration into an application.
The next stages for this project will involve using the new DBS grammar format (Iguana 5 documentation) to make it easy to supply structured data which can be used to populate records with these web service calls. There are definitely performance optimizations we could make to keep things fast even with a big API.
One question that comes up is how this compares to something like RAML. RAML is a standard that also attempts to go down the SOAP path for RESTful JSON APIs, by providing a standard for defining them based on the YAML format. In my opinion, RAML makes the same mistakes as SOAP, for these reasons:
- It goes overboard in specifying low-level details about what kind of HTTP methods a web server supports. I don’t think this makes much sense. If this is an API to my EMR then I want to get information related to the domain of that EMR – I don’t want to be thinking in terms of whether or not this interface supports HTTP ‘PUT’, ‘GET’ and ‘DELETE’ operations. HTTP return codes specified in exhaustive detail aren’t useful. This is all low-level noise that shouldn’t matter to me. A decent RESTful interface should work with both POST and GET, and it should be possible to pass the parameters as either GET or POST parameters. When calling the RESTful API of an EMR I should be working at a higher level of abstraction.
- It is not valid to think that just because you have an excessively complex ‘API contract’, the underlying implementations will conform to that contract and that software out there will correctly implement all its nuances. It gives a false sense of security. Apply the KISS principle.
- More complexity for features that don’t offer value means more mental bandwidth consumed by cruft that doesn’t add value to your core task. It’s a common enterprise software ‘anti-pattern’: developers waste their time becoming experts in the complex technology of the day and making bloated software, instead of putting that effort into making their end products easy to use and fixing the core problem. Put another way – Apple would never have invented the iPhone if they had left it up to a couple of ‘UML architects’…
- More complexity also means less time to optimize performance, since effort is consumed by implementing all those features that didn’t add value in the first place.
- Complex standards are hard to make truly interoperable. What happens is that one implementation becomes the so-called ‘de facto’ standard. That’s awesome if you are the vendor that invented and/or promoted the complex standard. Not so great if you bought into the technology in the first place. It’s a game that gets played out again and again, and there always seems to be a new generation of people that fall for it. RAML looks very much positioned to tie you into a particular vendor, which is the lead contributor to the standard.
It all reminds me of back in 1995, when COM, DCOM and ActiveX were all the rage, and the amount of effort I spent learning about single-apartment versus multi-apartment threading models, thinking there was some deep wisdom in those choices whose mastery would make me an expert programmer. Eh, no – just arbitrary, meaningless complexity for its own sake.
To follow progress and talk about this app please subscribe or talk on this forum.