Load balancing using Iguana Channels

This is a small example showing how Iguana channels can be used to create a load balancer.

It can balance between any number of inbound channels, which do not all have to belong to the same Iguana instance.

Note: we balance Iguana Channels, not Iguana Instances, as our atomic operands; these channels can belong to different Iguana instances.

The approach is based on the example ‘Monitoring Queue Size’.

If you aren’t interested in the discussion of how the various principles of load balancing apply to the Iguana 5 Translator, then skip down to the sample code.

Discussion:

The Iguana 5 Translator can load balance inbound listening channels.

Let’s see first what balancing is. There are numerous scenarios, but one basic scenario says: engage the next processor when the processors in use become saturated, and take it back off when the load goes down.

In other words, load balancing is a process of increasing reliability through processor redundancy.

Outbound Iguana channel balancing is not covered by this example; however, we could take advantage of the same approach should we wish to build it.

Let’s establish some jargon. We will call the Iguana channel that takes the initial workload and needs extra help the ‘balanced’ channel.

Iguana channels that kick in to help the balanced channel we will call ‘balancing’ channels.

The Iguana channel orchestrating operations will be the ‘dispatcher’ channel.

Remember that we are balancing Iguana Channels, not Iguana Instances.

The example below assumes that the first, ‘balanced’, channel is on localhost, while the 2nd, 3rd, through Nth channels can be located on any host. It could be adjusted so that the 1st channel is on any host as well, but that would make the proof-of-concept code too complex for my taste.

One may object to locating the dispatcher channel on the same host where the balanced channel resides, but that is purely a matter of your own implementation. The dispatcher channel can be on any machine (think of your potential availability scenario).

This claim is easy to confirm from the sample code: just replace the localhost reference with some other IP address in the function where the dispatcher code reads queue levels.
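For instance, in queuemon.checkQueue() below, the status URL points at the local instance. Polling a remote instance is a one-line change; note that the IP address and port here are made-up examples, and the remote port must be given explicitly, because iguanaconfig reads only the local configuration:

-- original: read queue levels from the local Iguana instance
local url = 'http://localhost:'..
 iguanaconfig.config().iguana_config.web_config.port..'/status.html'

-- adjusted: read queue levels from a remote Iguana instance
local url = 'http://192.168.1.50:6543/status.html'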

Potential enhancements for this module include ‘round robin’ scheduling, error counts for specific balancing channels, free disk space checks, or counts of already assigned requests. But again, that would become too complex for a proof-of-concept example.

A few words about logs. For a brief moment this may look like a drawback, as our daily log records become distributed de facto.

Daily information may get split among multiple Iguana instances, but don’t forget that the Global Dashboard makes listing channels across instances a breeze, and the one-click ability to export any Iguana instance’s logs into CSV-formatted files allows for further consolidation with any third-party report analysis software or an Excel spreadsheet.

Now it is time to mention subsequent requests routed by the dispatcher to balancing channels residing on different Iguana instances, somewhat analogous to the ‘persistence’ required from a load balancing system.

Examples include query/response scenarios where a response is served by a Translator script, or scenarios of ‘don’t issue an outbound message unless a specific message has already been processed’.

Satisfying this sort of persistence means that the involved balancing channels will need to read database tables shared among them, in order to share complete ‘knowledge’ about the current state of the enterprise.
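As a minimal sketch of the idea, each balancing channel could record the control ID of every message it processes in a shared table, and consult that table before acting. The database, table, and credentials below are made up for illustration, and a real implementation should also escape the SQL parameters:

-- Shared connection; all balancing channels point at the same database.
local Conn = db.connect{
   api=db.MY_SQL,
   name='shared_state@dbhost',  -- database@host, example value
   user='iguana',               -- example credentials
   password='secret'
}

-- True if some balancing channel has already processed this message.
function alreadyProcessed(ControlId)
   local R = Conn:query{sql=
    "SELECT control_id FROM processed WHERE control_id = '"..ControlId.."'"}
   return #R > 0
end

-- Record the message so that other balancing channels can see it.
function markProcessed(ControlId)
   Conn:execute{sql=
    "INSERT INTO processed (control_id) VALUES ('"..ControlId.."')",
    live=true}
end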

Example:

This is our main module:

require ('queuemon')
require ('llp')

function main(Msg)

   local secondChannel = 'load balance 2' -- <your second channel name>
   local firstChannel = 'load balance 1' -- <your first channel name>
   local count = 3
   -- Setting the channel name and count explicitly
   local QC = queuemon.checkQueue{channel = firstChannel, count = count}  

   if QC then
      -- keep whichever ONE of the next two lines suits your setup:
      send2Channel(secondChannel,Msg) -- use it to send to a channel on a remote machine
      push2queue(secondChannel,Msg)   -- use it to push to the queue for a channel on this machine
   else
      push2queue(firstChannel,Msg)    -- push to the queue for a channel on this machine
   end

end

function push2queue(channel,msg)
   --[[ use example from Wiki to see how we 
   serialize as JSON object; and in target channels source 
   component 'From Translator' filter messages by value 
   of first field in JSON object, respectively.

   if JSON objects 1st field has index/name of *this* channel 
   - then process this message, else ignore this message by 
   immediately executing 'return' in function main() of 
   target channel.
   ]]

   local q = toJSON(channel,msg)
   queue.push{data = q}
   iguana.logDebug(channel..' > '..msg)
end

function toJSON(channel,msg)
   local F = {}
   F.a1 = channel
   F.b1 = msg
   return json.serialize{data=F}
end
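For example, with channel set to 'load balance 2', toJSON() returns a string along the lines of {"a1":"load balance 2","b1":"MSH|^~\&|…"}: the first field carries the name of the target channel and the second carries the original message.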

function send2Channel(channel,msg)
   --[[ use example from Wiki how to use 
   Translator as LLP Client in order 
   to send msg to IP and port of 
   channel 1 or channel 2 respectively.

   If you use IP=127.0.0.1 (i.e. localhost)
   then msg will be sent to another channel 
   on same (source) machine. 

   Please note that even if you have more 
   than one Iguana instance on a machine, 
   msg will be sent to channel respective 
   to specified port value of listening 
   channel.
   ]]
   iguana.logDebug(channel..' > '..msg)
end
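To make the filtering idea described in push2queue() concrete, here is a minimal sketch of main() for a target channel fed from the queue; the field names a1 and b1 match the toJSON() function above:

-- Sketch of the target channel's script: parse the JSON envelope
-- and ignore messages addressed to another balancing channel.
function main(Data)
   local F = json.parse{data=Data}
   if F.a1 ~= iguana.channelName() then
      return
   end
   -- Recover the original message and pass it on.
   queue.push{data=F.b1}
end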

and we will use a few shared modules:

Module iguanaconfig from the ‘Monitoring Queue Size’ page, taken ‘as is’.

Module node extension:

-- Utilities for "node" Values

-- Coerce a node value into a string.
function node.S(ANode)
   return tostring(ANode)
end

-- Return a node's value.
function node:V()
   return self:nodeValue()
end
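With this extension loaded, Chan.MessagesQueued:S() becomes shorthand for tostring(Chan.MessagesQueued), and Chan.Name:V() for Chan.Name:nodeValue().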

Module llp from the ‘Using Translator as LLP Client’ page, taken ‘as is’. Please note that you will have to use the example from that page to complete the project to your particular specifications.
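As a hint of what that completion might look like, here is a minimal sketch of send2Channel(), assuming the llp module from that page provides llp.connect() returning a connection object with send() and close() methods; the host and port are placeholder values:

function send2Channel(channel, msg)
   -- Connect to the listening balancing channel; placeholder host/port.
   local s = llp.connect{host='192.168.1.50', port=5146}
   s:send(msg)
   s:close()
   iguana.logDebug(channel..' > '..msg)
end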

A slightly modified module queuemon from the ‘Monitoring Queue Size’ page; see the modified code below.

require("iguanaconfig")

queuemon = {}

local function CheckChannel(Chan, Count, Name)
   if Chan.Name:nodeValue() == Name then
      local QC = tonumber(Chan.MessagesQueued:nodeValue())
      if QC > Count then
         iguana.logDebug('ALERT:\n Channel '..Name..' has '..QC..' messages queued.') 
         return true
      end   
   end
end

function queuemon.checkQueue(Param)
   -- We default to a queue count of 100
   if not Param then Param = {} end
   if not Param.count then Param.count = 100 end
   if not Param.channel then Param.channel= iguana.channelName() end
   local url = 'http://localhost:'..
    iguanaconfig.config().iguana_config.web_config.port..'/status.html'
   iguana.logDebug(tostring(url))
   -- We need a user login here.  Best to use a user with few
   -- permissions.
   local S = net.http.get{url=url, 
      parameters={UserName='admin',Password='password', Format='xml'}, 
      live=true}
   S = xml.parse{data=S}

   for i = 1, S.IguanaStatus:childCount('Channel') do
      local Chan = S.IguanaStatus:child("Channel", i)
      local QC = CheckChannel(Chan, Param.count, Param.channel)
      if QC then return QC end
   end
   return
end

For the impatient, a complete project file is offered: load_balancing.zip.

Create a listening channel with a ‘From LLP Listener’ Source and a ‘To Translator’ Destination, and import the project into the Filter component script. Let’s call it the ‘Balancing channel’.

Modify channel names and add or remove channels to balance as needed, along with adjusting other parameters such as IP, port, user, password, etc.

Complete the functions send2Channel() and push2queue() as applicable to your requirements.

If you balance channels on the local instance, then configure the balanced channels to be ‘From Channel to …’ and to read messages from the queue of the ‘Balancing channel’.

If you balance using LLP over TCP, then the queue of the ‘Balancing channel’ is expected to remain empty, since the function send2Channel() does not push messages to the queue; this prevents them from being piped to the Destination component.

A last comment: this example was created using Iguana version 5.5.1. It may well work with earlier versions too, but it is good practice to run the latest available version.