Tuning a large scale Iguana install

Optimal settings for log file sizes

Another area where we’ve seen problems is when a single Iguana server has too much aggregate data going through it. Once the log/queue files grow into the gigabytes they can create operational headaches. With too much data you start to hit the limits of how quickly the operating system can flush it out to disc. If you need to restart the server and it has to re-index the log files, that can take a long time. Searches over massive logs can also cause spikes in memory usage, which in turn can lead to problems such as exhausting system memory.
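A simple way to keep an eye on this is to periodically measure how much log/queue data an instance is carrying. Below is a minimal sketch in Python; the log directory path and the warning threshold are assumptions, not Iguana defaults, so substitute the log directory configured for your own instance.

```python
# Minimal sketch: report the aggregate log/queue size for one Iguana instance.
# LOG_DIR and THRESHOLD_GB are hypothetical values -- adjust them to match
# the "Log Directory" setting and tolerance of your own deployment.
import os

LOG_DIR = "/opt/iguana/logs"   # hypothetical path to the instance's log directory
THRESHOLD_GB = 10              # hypothetical ceiling before you consider splitting

def dir_size_bytes(path):
    """Walk the directory tree and sum the size of every regular file."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):
                total += os.path.getsize(full)
    return total

size_gb = dir_size_bytes(LOG_DIR) / (1024 ** 3)
print(f"Aggregate log/queue data: {size_gb:.1f} GB")
if size_gb > THRESHOLD_GB:
    print("Warning: consider splitting interfaces across more Iguana instances.")
```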

The solution is simple – divide into smaller silos. Distribute your interfaces over multiple Iguana instances. Each instance then carries less log/queue data, so you avoid the problems that arise when log/queue data grows too large on any single instance. Pay careful attention to the underlying physical stores that the log files live on – you’ll have problems if every instance flushes to a single physical disc.
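As a quick sanity check on that last point, you can compare which device each instance’s log directory actually sits on. The sketch below is illustrative only: the instance-to-directory mapping is hypothetical, and `st_dev` identifies the filesystem/device, which only approximates “separate physical disc” when volume managers or RAID are in play.

```python
# Minimal sketch: flag Iguana instances whose log directories share a device.
import os
from collections import defaultdict

INSTANCE_LOG_DIRS = {   # hypothetical mapping of instances to log directories
    "iguana_a": "/mnt/disk1/iguana_a/logs",
    "iguana_b": "/mnt/disk2/iguana_b/logs",
    "iguana_c": "/mnt/disk2/iguana_c/logs",   # shares a device with iguana_b
}

by_device = defaultdict(list)
for instance, path in INSTANCE_LOG_DIRS.items():
    by_device[os.stat(path).st_dev].append(instance)

for device, instances in by_device.items():
    if len(instances) > 1:
        print(f"Device {device}: {', '.join(instances)} share the same underlying store")
```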