Optimizing lots of real-time HL7 feeds into a SQL database

This is quite a common configuration for a vendor.

We have N customers feeding real-time HL7 feeds into SQL databases. Each customer has their own separate logical database, but those databases are served from shared database servers with shared hardware resources.

We have lots and lots of customers with this configuration. A question about it came up from a client at our user conference, so I thought I would jot down some things worth thinking about in this type of scenario.

Now, a typical production issue that might arise in this type of configuration is one of the customers spiking the throughput they send into the data center. A typical reason for this might be a system outage on their part, or a problem that requires them to replay, say, five days' worth of data, which results in a large volume of messages needing to be processed.

The problem is that this can result in that one channel hammering the database server, and because the physical disk and database server resources are shared, it slows down performance for all the other customers. It’s almost like a denial of service attack.

There are a number of ways to improve the configuration for this type of scenario. One is to leverage the APIs available in Iguana to detect when a queue is very large and slow down the Lua code using util.sleep. Each translator runs on an independent thread, so the impact of this is merely to slow down the translator in question.

This little code snippet shows how you can use iguana.status to make a quick helper function to find out the current queue depth for a channel:

-- Returns the number of messages queued for the current channel by
-- parsing the XML status report from iguana.status().
function MyQueueDepth()
   local StatInfo = xml.parse{data=iguana.status()}.IguanaStatus
   local MyGuid = iguana.channelGuid()
   for i=1, tonumber(StatInfo.NumberOfChannels:S()) do
      local Channel = StatInfo:child("Channel", i)
      if Channel.Guid:S() == MyGuid then
         return tonumber(Channel.MessagesQueued:S())
      end
   end
   return 0  -- channel not found in the status report
end
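
Once you have that helper, throttling is just a matter of checking the queue depth and pausing when it gets too big. Here is a minimal sketch of how that might look in main(); the threshold and pause values are made-up numbers that you would want to tune for your own environment (util.sleep takes the pause time in milliseconds):

-- Sketch only: back off when this channel's queue gets deep so we stop
-- hammering the shared database server. Threshold and pause are assumptions.
function main(Data)
   local Threshold = 1000   -- assumed queue depth limit
   local PauseMs   = 2000   -- assumed pause, in milliseconds

   if MyQueueDepth() > Threshold then
      util.sleep(PauseMs)
   end

   -- ... normal parsing and database inserts go here ...
end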

Now, another way to approach this problem is to optimize the database interaction. It’s possible to insert data into a database faster by batching it. Have a good read of an article I wrote about pipelining.
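
As a rough illustration of the batching idea (a sketch, not code from the pipelining article), you could accumulate rows in a Lua table and issue one multi-row INSERT instead of one statement per message. The Patient table, its columns and the SqlQuote helper below are all made up for illustration, and the quoting is deliberately simplified:

-- Sketch of batching inserts into a hypothetical Patient(Mrn, LastName) table.
local Batch = {}

-- Simplified quoting for the sketch; use proper escaping in real code.
local function SqlQuote(Value)
   return "'"..Value:gsub("'", "''").."'"
end

function QueueRow(Mrn, LastName)
   Batch[#Batch+1] = '('..SqlQuote(Mrn)..','..SqlQuote(LastName)..')'
end

function FlushBatch(Conn)
   if #Batch == 0 then return end
   -- One multi-row INSERT costs far fewer round trips than many single-row ones.
   Conn:execute{
      sql='INSERT INTO Patient (Mrn, LastName) VALUES '..table.concat(Batch, ','),
      live=true}
   Batch = {}
end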

Another thing that may be worth a deeper look is how the insertion of data from Iguana is actually happening. db.merge is good enough for most interfaces, but it may be worthwhile seeing if you can tweak things using the Lua version of db.merge.
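
For reference, the stock approach looks something like the sketch below: build a table tree from a vmd and hand it to conn:merge. The vmd filename, connection settings and MapPatient helper are all assumptions for illustration; the Lua-based version of db.merge essentially replaces that single merge call with explicit SQL that you can then tune:

-- Baseline merge-style insert. 'patients.vmd', the connection details and
-- MapPatient() are hypothetical placeholders.
local Conn = db.connect{
   api=db.SQL_SERVER,   -- assumed database type
   name='CustomerDb',   -- assumed data source name
   user='iguana',
   password='secret',
   live=true}

function ProcessMessage(Msg)
   local T = db.tables{vmd='patients.vmd', name='Message'}
   MapPatient(T, Msg)   -- hypothetical mapping routine fills the table tree
   Conn:merge{data=T, live=true}
end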

With optimization it’s always good to keep an open mind and approach things with a scientific mindset. If you have an idea for how to improve performance, figure out a way to test its validity and be prepared to revisit your assumptions.
