This topic contains 15 replies, has 3 voices, and was last updated by Jeff Drumm 3 years, 4 months ago.

PIPE to JSON

  • All,

    This is my inaugural post. I tried to avoid posting by searching for the answer first. A link to samples would be appreciated if it exists. I have a delimited file:

    First|Last|City|State
    John|Doe|Nashville|TN

    I’d like to end up with a JSON Object:

    {
       "First" : "John",
       "Last" : "Doe",
       "City" : "Nashville",
       "State" : "TN"
    }

    I feel pretty confident about being able to output the JSON, but I’m stumped on the code necessary to set the delimiter and reference the elements in the PIPE delimited file. A bit of sample code that parses the line would get me on track.

    Cheers!

    Here’s a bit of code that should help with the parsing of the delimited data:

    function main(Data)
       -- Create a table with one record per element
       local recs = Data:split('\n')
       -- First row contains element names
       local keys = recs[1]:split('|')
       -- Create the table for the JSON object
       local obj  = {}
       -- Loop through the records, starting after the header
       for r=2, #recs do
          -- For each record, split its values into a table
          local vals = recs[r]:split('|')
          -- Loop through the fields and assign each to a key
          for k=1, #keys do
             obj[keys[k]] = vals[k]
          end
          -- obj is serialized as JSON into J
          local J = json.serialize{data=obj}
          -- Do something with J
       end
    end
    

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com


    Thanks for your help. I’m in the process of building the test translator. I’m just building it to test the functionality for now.

    I think I’m missing some basics. And once again forgive me for not understanding the flow:

    Source: This specifies the place from which we are getting the data and it delivers the data to the filter.

    Filter: This performs any manipulation of the data

    Destination: This specifies where the data should be delivered to. This is where I’m a little lost. [ Configure another channel using a ‘From Channel’ source to use as a destination for this channel.] I guess I don’t know what a ‘From’ Channel is or how to set it up. For some reason I feel like I should be able to configure a ‘To Channel’ at this point.

    This is very helpful. It appears that I may be missing the module split(). I searched for it in the knowledge base and did not find it. Can you direct me to it so I can install it?

    ERROR (dismiss)
    attempt to call method 'split' (a nil value)

    I think I’m missing some basics. And once again forgive me for not understanding the flow:
    Source: This specifies the place from which we are getting the data and it delivers the data to the filter.

    Correct. You can see the various data reception mechanisms in the drop down. Note that for some interfaces, notably Database connections, a Translator is the preferred Source (the Database source is deprecated).

    Filter: This performs any manipulation of the data

    The Filter is used for transformation and filtering; it’s basically another translator in the route between the Source and Destination. Its use is optional though; some systems simply require a pass-through.

    Since you’re interested in translating to JSON, I would suspect you’re intending to call a RESTful web service. That might mean you would use “To HTTPS” or “To Translator” as a destination, since you can invoke web services from either.

    Destination: This specifies where the data should be delivered to. This is where I’m a little lost. [ Configure another channel using a ‘From Channel’ source to use as a destination for this channel.] I guess I don’t know what a ‘From’ Channel is or how to set it up. For some reason I feel like I should be able to configure a ‘To Channel’ at this point.

    The purpose of the “From Channel” Source is to allow you to receive data from a single source to multiple destinations. This is fairly common in the healthcare world; one system needs to send Admit/Discharge/Transfer data to some number of downstream systems, and that can be handled in Iguana by creating an “ADT” channel with “LLP Listener” as the source and “LLP Client” as the destination (this gets downstream system #1 connected). Additional interfaces can then be added for additional downstream systems simply by adding channels with “From Channel” as the Source, that specify the “ADT Channel” as that source and “LLP Client” as the destination. Each channel can manipulate the data as needed in the Filter.
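    The fan-out Jeff describes can be pictured roughly like this (channel names are illustrative, taken from the ADT example above):

```
Sending system ──► [ADT Channel]  LLP Listener ──► LLP Client ──► downstream #1
                        │
                        ├──► [Channel 2]  From Channel (ADT Channel) ──► LLP Client ──► downstream #2
                        └──► [Channel 3]  From Channel (ADT Channel) ──► LLP Client ──► downstream #3
```

    Each "From Channel" channel gets its own Filter, so each downstream system can receive its own variant of the data.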

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com

    This is very helpful. It appears that I may be missing the module split(). I searched for it in the knowledge base and did not find it. Can you direct me to it so I can install it?
    ERROR (dismiss)
    attempt to call method 'split' (a nil value)

    I should have included the entire main project in the code I provided. Adding the lines:

    require 'node'
    require 'stringutil'
    

    Will help 🙂
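    For anyone following along outside Iguana, where the stringutil module isn't available, a minimal stand-in for the split method might look like the sketch below. This is an assumption-laden substitute, not Iguana's actual implementation, and it only handles single-character separators; inside Iguana, `require 'stringutil'` is the right answer.

```lua
-- Minimal stand-in for stringutil's string:split (sketch only).
-- Assumes a single-character separator; the '%' prefix makes the
-- separator literal even if it is a Lua pattern magic character.
function string.split(s, sep)
   local parts = {}
   -- Append one trailing separator so the final field is captured too
   for field in (s .. sep):gmatch('(.-)%' .. sep) do
      parts[#parts + 1] = field
   end
   return parts
end
```

    With this in place, `("John|Doe|Nashville|TN"):split('|')` yields a table of four fields, matching how the thread's code uses the method.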

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com

    Yes, that made a significant difference!

    The code as written is very rudimentary; it doesn’t do anything special to handle empty rows, for example.

    It also assumes that you’re processing data in a tabular format, where the first row contains the field identifiers and subsequent rows contain one or more records of actual data. This may not actually be the format of your data, so beware.

    Example:

    First|Last|City|State
    John|Doe|Nashville|TN
    Jane|Smith|Louisville|KY
    Bob|Jones|Melville|NY
    Gary|Sampson|Danbury|CT
    

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com

    OK,

    I’m making real progress now. I’ve got the PIPE side working great. It processes reliably and looks like this on the Channel config. Let’s call it Channel A.

    Channel A: FILE -> QUE

    As you suggested, I made a second channel, Channel B, with a type of “From Channel” and chose the source as “Channel A”. I then gave it a destination as a file in another directory for testing purposes.

    Channel B: QUE -> FILE

    I activated Channel B, then Channel A. Channel A is working great, but for some reason it’s not sending the JSON result through to Channel B. Thanks for your help.

    Just a guess without seeing your code, but since you’re doing your transformation on the inbound channel, I’ll bet you don’t have a call to queue.push{} to actually stuff your JSON into the queue. For example, if your serialized JSON is in variable Out, the syntax would be:

    queue.push{data=Out}
    

    Hope that helps.

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com

    Also, you could accomplish the same thing by having a channel with “From File” as the source and “To File” as the destination, with your Pipe -> JSON translation running in the filter. If you ever need to send that JSON somewhere else, you could simply add a channel with “From Channel” as the source (and selecting your existing channel in its configuration), and “To <whatever>” as the Destination.

    Don’t waste channels 🙂

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com

    Alright, I’ve consolidated it to a single File In / File Out channel. My script looks like this:


    require 'node'
    require 'stringutil'

    -- The main function is the first function called from Iguana.
    -- The Data argument will contain the message to be processed.
    function main(Data)
       -- Create a table with one record per element
       local recs = Data:split('\n')
       -- First row contains element names
       local keys = recs[1]:split('|')
       -- Create the table for the JSON object
       local obj = {}
       -- Loop through the records, starting after the header
       for r=2, #recs do
          -- For each record, split its values into a table
          local vals = recs[r]:split('|')
          -- Loop through the fields and assign each to a key
          for k=1, #keys do
             obj[keys[k]] = vals[k]
          end
          -- obj is serialized as JSON into J
          local J = json.serialize{data=obj}
          -- Do something with J
          queue.push{data=J}
       end
    end

    My output looks like this:


    {
    "State": "TN",
    "Last": "Butler",
    "City": "Franklin",
    "Zip": "37064",
    "First": "Bill"
    }
    {
    "State": "TN",
    "Last": "Conner",
    "City": "Nashville",
    "Zip": "37203",
    "First": "Thomas"
    }
    {
    "First": ""
    }

    Notice the extra First tag at the bottom. I’m not clear why. The source file looks like this and does not have an extra \n at the end.


    First|Last|City|State|Zip
    Bill|Butler|Franklin|TN|37064
    Thomas|Conner|Nashville|TN|37203

    I really appreciate your help on this and hope this thread will help others. I’ve been looking at a lot of different products and it seems like Mirth is the only other one that comes close, but I really like the scripting engine on your product. It’s a huge time saver.

    Yes, I got the same result. It seems to happen on the last iteration of the outer loop, where the record is handled as though it were an empty line. So I simply added a conditional around the key/value assignment:

          if recs[r] ~= '' then
             local vals = recs[r]:split('|')
             -- Associate keys (field names)
             -- with values
             for v=1, #vals do
                obj[keys[v]] = vals[v]
             end
          end
    

    You’ll also notice that I’m now iterating over the values rather than the keys (field names); that way if a record is short some number of values it won’t cause the script to halt.
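    Pulling the thread's pieces together, a consolidated sketch of the parsing logic might look like the following. It's written as plain Lua so it runs outside Iguana: the `split` helper is an illustrative stand-in for stringutil, and `parseRecords` is a hypothetical name, not an Iguana API. Two details worth noting: the serialize/push step belongs inside the blank-line guard as well, and `obj` is created fresh for each record so a short record can't inherit leftover fields from a longer previous one. In Iguana, each table this returns would then go through `json.serialize{}` and `queue.push{}` as shown earlier in the thread.

```lua
-- Illustrative helper: split on a single-character separator
-- (stand-in for Iguana's stringutil; sketch only).
local function split(s, sep)
   local parts = {}
   for field in (s .. sep):gmatch('(.-)%' .. sep) do
      parts[#parts + 1] = field
   end
   return parts
end

-- Parse a header row plus data rows into a list of key/value
-- tables, skipping blank lines (such as the one produced by a
-- trailing newline, which caused the stray {"First": ""} output).
function parseRecords(data)
   local recs = split(data, '\n')
   local keys = split(recs[1], '|')
   local out = {}
   for r = 2, #recs do
      if recs[r] ~= '' then
         -- Fresh table per record: no leftover fields from a
         -- previous, longer record
         local obj = {}
         local vals = split(recs[r], '|')
         -- Iterate over values, not keys, so short records
         -- don't halt the script
         for v = 1, #vals do
            obj[keys[v]] = vals[v]
         end
         out[#out + 1] = obj
      end
   end
   return out
end
```

    Feeding it the thread's sample data with a trailing newline produces exactly two records and no empty third one.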

    PS. It’s not ‘my’ product, per se. I work for HICG, a company that provides professional services for healthcare integration, and we very much enjoy working with the folks at Interfaceware.

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com
