What’s the optimal number of interfaces for an Iguana instance?

We often get asked what size of server clients should buy to run Iguana on, and how many channels per instance is optimal.

The answer is that it depends.

There are many variables that come into the equation:

  1. What your interfaces do and how they are written.
  2. The hardware you are running on: the type of hard disks and CPU, whether you are in a virtualized environment, and which operating system you use.
  3. External factors like the databases being fed into or read from, network latency, and so on.

Anecdotally, from what we have seen in the field, CPU is seldom maxed out. It's usually worth spending more of the budget on fast disks and on making sure the network is fast.

Fortunately, the channel API in Iguana makes it really easy to create, delete and alter channels programmatically, so it is easy to whip up a realistic test instance of Iguana and make real empirical measurements of what kind of horsepower you need and how to correctly scale your hardware and other system components.

I will show the script that can be used to set up a bank of LLP listening channels, each feeding into, say, a Translator that populates a database.  What we are doing is very simple:

  1. I define a channel that represents a typical LLP->Translator->database interface.  Just working with the one defined in demo channel 2 works.
  2. I make a simple script which clones N instances of those channels.
  3. That script also alters the listening port of each clone to be part of a sequence starting at 6000 (6000 for Copy1, 6001 for Copy2, and so on).

The script I use for cloning and modifying the channels is not general purpose, but you should be able to modify it for your needs. If you had a different scenario that involved, say, a bank of X12 feeds from an FTP server, you would probably want a different input directory for each feed; a sketch of that variation follows below.  Fortunately, auto-completion makes it easy to modify the script to alter whichever part of the channel configuration you need to.
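As a hypothetical sketch of that variation, a configurator for a bank of file-based feeds might look something like this. Note that from_file and file_path are placeholder field names I am assuming for illustration; use the Translator's auto-completion against your actual channel configuration to find the real node names:

-- Hypothetical configurator for a bank of file-based feeds.
-- NOTE: 'from_file' and 'file_path' are placeholder names; use
-- auto-completion to find the actual nodes in your channel config.
function ModifyFileClone(C, F)
     local Index = tonumber(C.channel.name:S():sub(5))
     -- Give each cloned feed its own input directory, e.g. feeds/feed3/
     C.channel.from_file.file_path = 'feeds/feed'..Index..'/'
end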

So assuming you already have a channel set up doing, say, an LLP->Translator interface, the following script can be used to clone it N times.  The script will copy the channel and modify the listening LLP ports into the sequence 6000 through 5999+N.

Here it is. It's only a short script, and if you are comfortable with the Translator already it should be easy to follow:

require 'iguanaServer'

-- Configurator applied to each clone: C holds the cloned channel's
-- configuration (F is unused here). Clones are named Copy1, Copy2, ...,
-- so stripping the "Copy" prefix gives the index, from which we derive
-- a unique LLP listening port.
function ModifyClone(C, F)
     local Index = tonumber(C.channel.name:S():sub(5))
     C.channel.from_llp_listener.port = 5999 + Index -- Copy1 -> 6000, Copy2 -> 6001, ...
end

-- Clone the channel Name Count times, as Copy1 .. CopyN, running
-- ModifyClone on each copy so every clone listens on its own port.
function CloneChannel(C, Name, Count, Live)
     for i = 1, Count do
          C:cloneChannel{name=Name, new_name="Copy"..i, configurator=ModifyClone, live=Live}
     end
end

-- Delete the clones named Copy1, Copy2 etc. (Count is not actually used;
-- every channel whose name starts with "Copy" is removed.)
function DeleteCloneChannels(C, Count, Live)
     local L = C:listChannels{live=true}
     for i = 1, L.IguanaStatus:childCount('Channel') do
          local ChanName = L.IguanaStatus:child('Channel', i).Name:S()
          if ChanName:sub(1, 4) == 'Copy' and #ChanName > 4 then
               C:removeChannel{name=ChanName, live=Live}
          end
     end
end

function main(Data)
     local C = iguanaServer.connect{username='admin', password='password', live=true}
     local Count = 2
     --DeleteCloneChannels(C, Count, true)
     --CloneChannel(C, '02-Socket to Database', Count, true)
end

You’ll need to change the Count variable to the number of cloned channels you want, and uncomment the DeleteCloneChannels and CloneChannel calls in the main function for this code to do anything. The name of the channel to clone is passed in as the second argument to CloneChannel.
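For example, with the calls uncommented and a larger Count, main might look like this (the channel name is the same demo channel used above):

function main(Data)
     local C = iguanaServer.connect{username='admin', password='password', live=true}
     local Count = 50 -- create 50 clones listening on ports 6000 to 6049
     DeleteCloneChannels(C, Count, true) -- clear out any clones from a previous run
     CloneChannel(C, '02-Socket to Database', Count, true)
end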

The code uses the channel API to clone the channels, and the ModifyClone function to point each clone at a different LLP port.  The code should be simple enough to read, understand, and modify for a scenario more specific to your needs.  This screenshot may be helpful in further understanding the code:

[screenshot]

It does take a while to chug through creating this many channels, but it beats creating them by hand. This script makes it a doddle to go and create other Iguana instances which can be used to simulate incoming production load.
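To actually drive the cloned channels with traffic, any MLLP test client will do. Here is a minimal Lua sketch, assuming the LuaSocket 'socket' module is available, that fires one throwaway HL7 message at each cloned port; swap in messages representative of your real feed:

-- Minimal MLLP load-generator sketch (assumes LuaSocket is available).
local socket = require 'socket'

-- Wrap a message in MLLP/LLP framing: <VT> message <FS><CR>
local function llpFrame(Msg)
     return '\011'..Msg..'\028\013'
end

local function sendMessage(Host, Port, Msg)
     local S = assert(socket.connect(Host, Port))
     S:send(llpFrame(Msg))
     S:close()
end

-- One test message per cloned channel, on ports 6000 .. 5999+Count.
local Count = 50
local Msg = 'MSH|^~\\&|LOADTEST|LAB|MAIN|HIS|20240101120000||ADT^A01|MSG0001|P|2.3'
for i = 1, Count do
     sendMessage('localhost', 5999 + i, Msg)
end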

Once you have your simulated Iguana instances, you will need to watch the standard OS tools (top, iostat and vmstat on Linux, or Performance Monitor on Windows) to see which resources come under stress, and from that establish the optimal number of channels for a given type of interface under load.

Another key concept – Isolating Parts of the System

This is an area where I find a lot of clients could do with a better grasp. It's very helpful to think of Iguana as just one part of an overall pipeline, which includes not just Iguana but also the applications you are talking to, the databases, and so on.

To properly performance-tune a system, a key concept is to isolate each part of the system and measure its maximum throughput on its own.  For instance, some time ago we had a client who believed that Iguana's HTTP posting became slow when there were multiple channels. This resulted in a lot of angst and an unnecessary review of that code in Iguana.

I had a look at the problem and asked the client whether they could construct a pure console application in C# to test the maximum throughput of the end application, to establish the theoretical maximum we were shooting for.  They were very helpful and quickly put it together.

As it turned out, this was the key to solving the problem. It suddenly became obvious that the C# application itself was the bottleneck. The client was then able to focus their energies productively on optimizing their own application, and the problem was solved. Everyone was happy.

So this is one of the things I recommend anyone do when performance testing a large-scale system with Iguana. It will help you find contention that you may not initially be aware of, such as in your database. For example, you can easily comment out the code in your interface that hits the database, to compare performance with and without the database inserts.
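As a sketch of that idea: guard the insert behind a flag so you can flip the database in and out of the pipeline while measuring. hl7.parse is Iguana's standard parser; 'demo.vmd' and InsertMessage are placeholders standing in for your own vmd file and database code.

-- Toggle the database insert on and off to isolate its cost.
-- 'demo.vmd' and InsertMessage are placeholders for your own code.
local WriteToDatabase = false

function main(Data)
     local Msg = hl7.parse{vmd='demo.vmd', data=Data}
     if WriteToDatabase then
          InsertMessage(Msg) -- your real database insert goes here
     end
end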

Happy system tuning!  Feel free to ask any questions you have about this.
