This topic contains 0 replies, has 1 voice, and was last updated by  lev 4 years, 11 months ago.

Load-balance Iguana inbound channels with Iguana 5 Translator

  • This is a small example of how an Iguana channel can be used to create a load balancer.

    This can balance between any number of inbound channels, which do not necessarily belong to the same Iguana instance.
    Please note that we balance not Iguana instances but Iguana channels as our atomic operands, and these channels can belong to various Iguana instances.

    The approach is based on the example ‘Monitoring Queue Size’.

    If you are not interested in a general discussion of load balancing as it applies to the Iguana 5 Translator, skip down to the sample code.
    Discussion:
    The Iguana 5 Translator can load balance listening inbound channels.

    Let’s first see what balancing is: there are numerous scenarios, but one basic scenario says – engage the next processor when the processors in use become saturated, and take it back off when the load goes down.

    In other words, load balancing is a process of increasing reliability through processor redundancy.

    Balancing outbound Iguana channels is not covered by the offered example; however, it is the same approach to take advantage of, should we wish to build one.

    Let’s establish some jargon. We will call the Iguana channel that takes the initial workload and needs extra help the ‘balanced’ channel.
    The Iguana channels which kick in to help a balanced channel we will call ‘balancing’ channels.
    The Iguana channel orchestrating operations will be the ‘dispatcher’ channel.


    The example below assumes that the first balanced channel is on localhost, while the 2nd, 3rd, through Nth channels can be located on any host. It could be adjusted to have the 1st channel on any host as well, only then the principal example becomes too complex for my taste.

    One may object to locating the dispatcher channel on the same host where the balanced channel resides, but that is only a matter of your own implementation – the dispatcher channel can be on any machine (think of a potential availability scenario).
    Confirmation of the above claim is easily seen in the sample code; just replace the localhost reference with some other IP in the function where the dispatcher code reads queue levels.

    Additional directions in which this example could be enhanced include ‘round robin’ dispatching, scheduling, an error count per balancing channel, free disk space, or a count of already assigned requests. But again, that would look too complex for a proof-of-concept example.
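    For instance, a ‘round robin’ variant could replace the queue check with a simple rotating selector. This is only a sketch; the channel names below are placeholders for your own balancing channels:

    ```lua
    -- Sketch of round-robin dispatching over the balancing channels.
    -- Channel names are placeholders.
    local channels = {'load balance 2', 'load balance 3'}
    local nextIndex = 1

    local function roundRobinChannel()
       local name = channels[nextIndex]
       nextIndex = nextIndex % #channels + 1
       return name
    end

    -- main() would then call send2Channel(roundRobinChannel(), Msg)
    -- instead of always targeting the same balancing channel.
    ```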

    A few words about logs. This may look like a drawback for a brief moment: our daily log records become distributed de facto.

    Daily information may get split among multiple Iguana instances, but don’t forget that the Global Dashboard makes listing across instances a breeze, and the one-click ability to export any Iguana instance’s logs into a CSV file allows further consolidation with Excel or any 3rd party report-analysis software.

    Now it is time to mention subsequent requests routed by the dispatcher to balancing channels residing on different Iguana instances – a sort of analogy to the ‘persistence’ required of a load-balancing system.

    Examples include query/response scenarios where the response is served by a Translator script, or scenarios of ‘don’t issue an outbound message unless a specific message has already been processed’.

    To satisfy this sort of persistence, the balancing channels involved will need to read database tables shared among them, in order to share complete ‘knowledge’ about the current state of the enterprise.
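    As an illustration only – the table name, connection details, and credentials below are all hypothetical – such a shared-state lookup in the Translator could be sketched like this:

    ```lua
    -- Hypothetical shared-state lookup; every balancing channel queries
    -- the same database, so all instances see the same 'knowledge'.
    -- Table 'processed_messages' and connection parameters are placeholders.
    function isAlreadyProcessed(ControlId)
       local R = db.query{
          api=db.MY_SQL, name='shared_state', user='iguana', password='secret',
          sql="SELECT 1 FROM processed_messages WHERE control_id = '"
              ..ControlId.."'"
       }
       return R[1] ~= nil
    end
    ```

    A production version would of course parameterize or escape the control ID rather than concatenate it into the SQL string.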
    Example:
    This is our main module:

    require ('queuemon')
    require ('llp')
    local function trace(a,b,c,d) return end -- no-op stub, handy for inspecting values in the annotation window
     
    function main(Msg)
       
       local secondChannel = 'load balance 2' -- balancing channel
       local firstChannel  = 'load balance 1' -- balanced channel
       local count = 3                        -- queue depth threshold
       -- Set the channel name and count explicitly
       local QC = queuemon.checkQueue{channel = firstChannel, count = count}
       
       if QC then 
          send2Channel(secondChannel,Msg) -- use it to send to channel on remote machine
          push2queue(secondChannel,Msg)   -- use it to push to queue for channel on this machine
       else 
          push2queue(firstChannel,Msg)   -- use it to push to queue for channel on this machine
       end
       
    end
     
     
    function push2queue(channel,msg)
       --[[ use example from Wiki to see how we 
       serialize as JSON object; and in target channels source 
       component 'From Translator' filter messages by value 
       of first field in JSON object, respectively.
       
       if JSON objects 1st field has index/name of *this* channel 
       - then process this message, else ignore this message by 
       immediately executing 'return' in function main() of 
       target channel.
       ]]
       
       local q= toJSON(channel,msg)
       queue.push{data = q}
       iguana.logDebug(channel..' > '..msg)
    end
     
     
    function toJSON(channel,msg)
       local F = {}
       F.a1 = channel
       F.b1 = msg
       return json.serialize{data=F}
    end
     
     
    function send2Channel(channel,msg)
       --[[ use example from Wiki how to use 
       Translator as LLP Client in order 
       to send msg to IP and port of 
       channel 1 or channel 2 respectively.
       
       If you use IP=127.0.0.1 (i.e. localhost)
       then msg will be sent to another channel 
       on same (source) machine. 
       
       Please note that even if you have more 
       than one Iguana instance on a machine, 
       msg will be sent to channel respective 
       to specified port value of listening 
       channel.
       ]]
       iguana.logDebug(channel..' > '..msg)
    end
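    The filtering described in the push2queue() comment could be sketched as follows, assuming it runs in the target channel’s script and that Data is the JSON string produced by toJSON() above (the field names a1/b1 match that function):

    ```lua
    -- Sketch only: filter in the target channel, matching the JSON layout
    -- produced by toJSON(). json.parse and iguana.channelName() are the
    -- Translator's built-in APIs.
    function main(Data)
       local F = json.parse{data=Data}
       if F.a1 ~= iguana.channelName() then
          return  -- message is addressed to another channel: ignore it
       end
       queue.push{data=F.b1}  -- forward the original message payload
    end
    ```

    The Wiki example mentioned in the comment remains the authoritative reference for this pattern.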

    and we will use a few shared modules:

    The module iguanaconfig from the ‘Monitoring Queue Size’ page, taken ‘as is’.

    The module node extension:

    -- Utilities for "node" Values
     
    -- Coerce a node value into a string.
    function node.S(ANode)
       return tostring(ANode)
    end
     
    -- Return the node's value.
    function node:V()
       return self:nodeValue()
    end

    The module llp from the ‘Using Translator as LLP Client’ page, taken ‘as is’. Please note that you will have to use the example from that page to complete the project to your particular specifications.
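    For illustration, send2Channel() could be completed along these lines with that llp module. This is a sketch; the host/port values are placeholders for your own listening channels:

    ```lua
    -- Sketch: send msg over LLP to the listener of the chosen channel.
    -- Host/port values below are placeholders.
    function send2Channel(channel, msg)
       local targets = {
          ['load balance 1'] = {host='localhost',   port=6001},
          ['load balance 2'] = {host='192.168.0.2', port=6001},
       }
       local t = targets[channel]
       local conn = llp.connect{host=t.host, port=t.port, timeout=2000}
       conn:send(msg)
       conn:close()
       iguana.logDebug(channel..' > '..msg)
    end
    ```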

    A slightly modified module queuemon from the ‘Monitoring Queue Size’ page; see the modified code below.

    require("node")
    require("iguanaconfig")
     
    queuemon = {}
     
    local function CheckChannel(Chan, Count, Name)
       if Chan.Name:nodeValue() == Name then
          local QC = tonumber(Chan.MessagesQueued:nodeValue())
          if QC > Count then
             iguana.logDebug('ALERT:\n Channel '..Name..' has '..QC..' messages queued.') 
             return true
          end   
       end
    end
     
    function queuemon.checkQueue(Param)
       -- We default to a queue count of 100
       if not Param then Param = {} end
       if not Param.count then Param.count = 100 end
       if not Param.channel then Param.channel= iguana.channelName() end
       local url = 'http://localhost:'..
        iguanaconfig.config().iguana_config.web_config.port..'/status.html'
       iguana.logDebug(tostring(url))
       -- We need a user login here.  Best to use a user with few
       -- permissions.
       local S = net.http.get{url=url, 
          parameters={UserName='admin',Password='password', Format='xml'}, 
          live=true}
       S = xml.parse{data=S}
     
       for i = 1, S.IguanaStatus:childCount('Channel') do
          local Chan = S.IguanaStatus:child("Channel", i)
          local QC = CheckChannel(Chan, Param.count, Param.channel)
          if QC then return QC end
       end
       return
    end

    For the impatient, a complete project file is offered: load_balancing.zip.

    Create a listening channel ‘From LLP Listener to Translator’ and import the suggested project into its Filter script. Let’s call it the dispatcher channel.
    Modify channel names and add/remove channels to balance as needed, along with adjusting other parameters like IP/port/user/password/etc…

    Complete functions send2Channel() and push2queue() as applicable with your requirements.
    If you balance channels on the local instance, then configure the balanced channels as ‘From Channel to …’ and have them read messages from the queue of the dispatcher channel.

    If you balance using LLP over TCP, then the queue of the dispatcher channel is expected to remain empty, since in function send2Channel() we don’t push the message to the queue, so that it is not piped to the Destination component.

    One last comment: this example was created using Iguana version 5.5.1. It may work with earlier versions as well, but it is good practice to run the latest available version.
