
Clarification

  • I’m struggling a bit at a higher level to understand the main difference between a Channel and a Translator, and when you would use either. My initial understanding is that a Translator is where the scripting lives. A Channel is a bit more confusing to me.

    My thoughts going forward are this:
    I have written a custom application in Xojo (formerly RealBasic) that reads database tables with some basic information, which I use to generate data that is inserted into HL7 database tables (as defined by the VMD). That channel then produces a disk-based output that is sent to CERNER as a disk file interface. So my question is, how would I implement this in the ‘new world’? Would I have a Translator => File setup?

    Each channel consists of three parts: a source component, an optional filter, and a destination component.

    There are five places where a translator instance can appear in a channel:

    1. LLP Listener components can optionally have a translator instance for processing ACKs with custom logic.
    2. There are From Translator components, which are good for polling things like files, databases, web services, etc.
    3. There is the From HTTPS channel type, which can act as a web service or even host a mini application, as we do with the Iguana apps.
    4. Filters can have translator components. A good pattern for generating data is to use a From Translator to generate small pieces of data, such as IDs, which are queued, and then have a Filter component flesh out the body of the data (see the sketch after this list). This works really well with the way the Translator is oriented around sample data.
    5. And last of all, there is the To Translator component.
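
    As a rough illustration of the pattern in point 4, a From Translator script could do nothing more than find the IDs to process and push each one into the queue. This is only a sketch: the db.connect settings, the PatientStaging table, and the PatientId column are placeholders for whatever your own database actually uses.

        -- From Translator: keep it as simple as possible.
        -- It just finds the IDs of the records to process and
        -- pushes each one into the queue as a tiny message.
        function main()
           -- Hypothetical connection settings; adjust to your database.
           local Conn = db.connect{
              api      = db.SQL_SERVER,
              name     = 'datasource_name',
              user     = 'user',
              password = 'password'
           }

           -- Hypothetical staging table listing the records to generate.
           local Rows = Conn:query{sql = 'SELECT PatientId FROM PatientStaging'}

           for i = 1, #Rows do
              -- Each queued message is just an ID; the Filter fleshes it out.
              queue.push{data = tostring(Rows[i].PatientId)}
           end
        end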

    For this problem I would suggest a From Translator, a Filter, and then it’s a toss-up between a To File and a To Translator component. It’s possible to write files from the Translator itself (sketched below): slightly more work, but more flexibility.
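
    As a hedged sketch of the ‘write files from the Translator’ option, a To Translator can write each message it receives to disk with plain Lua file I/O. The output directory and file-naming scheme below are just placeholders.

        -- To Translator: write each queued message out as a file.
        function main(Data)
           -- Placeholder output location and naming scheme;
           -- the timestamp just keeps the file names unique-ish.
           local FileName = 'C:\\outbound\\message_' .. os.ts.time() .. '.hl7'

           local F = io.open(FileName, 'w')
           F:write(Data)
           F:close()
        end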

    Happy to enlarge on that a little later – have to go!

    What role does the Filter play in your recommended solution? Would I not just go from Translator to File? That is, what added value does the Filter provide?

    The Filter gives you a chance to do additional processing between the Source and Destination components. It is handy in some scenarios, and it can safely be left unused if you don’t need it.

    OK, so in my proposed solution I create a Translator, write my data load code, and the Destination is a file. When I start the channel it runs and creates the destination file, but then I assume I somehow have to stop the channel, otherwise it will invoke my data generation script again and again. Lev and I discussed this yesterday. So I would either have a dummy table with a column I set a value in, so that when Iguana attempts to execute the script again it won’t, or I could programmatically shut down the channel after the first script invocation (last line in the script?).
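
    If you go the run-once route, one possible sketch of the dummy-table guard looks like this. The LoadControl table, its Loaded flag, and the connection settings are all invented names; the idea is just to check the flag at the top of the From Translator and return without queuing anything once the load has been done.

        -- From Translator: only generate data the first time through.
        -- 'LoadControl' and its 'Loaded' flag are hypothetical names.
        function main()
           local Conn = db.connect{
              api      = db.SQL_SERVER,
              name     = 'datasource_name',
              user     = 'user',
              password = 'password'
           }

           local Flag = Conn:query{sql = 'SELECT Loaded FROM LoadControl'}
           if #Flag > 0 and tostring(Flag[1].Loaded) == '1' then
              return  -- already ran; push nothing, the channel just idles
           end

           -- ... generate the data and queue.push it here ...

           -- Mark the load as done so the next poll does nothing.
           -- (By default Iguana does not run execute statements inside
           -- the editor, only in the live channel.)
           Conn:execute{sql = 'UPDATE LoadControl SET Loaded = 1'}
        end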

    This link might give some good ideas:

    http://help.interfaceware.com/kb/105

    I’ve been trying to find a good article I wrote about the general concept. But in general you want to make the code in the From Translator as simple as possible and have it push data into the queue.

    Then the Filter can do most of the transformation work. This works a lot better since it allows you to work with sample data loaded from the logs. You transform the data there and then push it into the queue, which is a mocked-out operation within the Translator IDE.

    Reading sample data and pushing things into the queue is a non-destructive operation, i.e. it doesn’t change the state of the environment. It is important to do things this way to make the Translator’s live execution environment work.
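
    To make that concrete, here is a hedged sketch of what the Filter script might look like: it takes the queued ID, looks up the supporting data, maps it into an HL7 message defined by a VMD, and pushes the result. The VMD name, message name, segments, fields, and table/column names below are all placeholders for your actual configuration.

        -- Filter: flesh out the queued ID into a full HL7 message.
        function main(Data)
           local PatientId = Data  -- the From Translator queued just the ID

           local Conn = db.connect{
              api      = db.SQL_SERVER,
              name     = 'datasource_name',
              user     = 'user',
              password = 'password'
           }

           -- Hypothetical lookup of the supporting data for this ID.
           -- (Simplified; real code would use a parameterized/escaped query.)
           local Patient = Conn:query{
              sql = "SELECT * FROM Patients WHERE PatientId = '" .. PatientId .. "'"
           }

           -- Build an outbound message from the VMD; 'demo.vmd' and the
           -- 'ADT' message definition are placeholders for your own.
           local Msg = hl7.message{vmd = 'demo.vmd', name = 'ADT'}
           Msg.PID[3][1][1]    = PatientId
           Msg.PID[5][1][1][1] = Patient[1].LastName
           Msg.PID[5][1][2]    = Patient[1].FirstName

           -- queue.push is mocked out in the editor, so it is safe to
           -- run this script repeatedly against sample data.
           queue.push{data = Msg:S()}
        end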

    I’m probably not explaining myself very well, but it’s a really key concept to grok; once you have it, things make sense with the Translator.

    Our documentation got substantially rearranged about six months ago with the new help system, so I am finding it a little hard to find some of the best material to explain this.

    My situation is a bit more involved. I have fifteen or so custom tables with addresses, names, encounter types, results, birth dates, rooms and beds; it is highly configurable. I then create an HL7 message based on the VMD and the static data tables. As I mentioned, I currently have a highly customized application that does this. It is really a data generator that I would like to recreate within the Iguana environment. Does the same approach apply, i.e. From Translator to File?

    There must be some way to create a queue of IDs of some sort that you can use to drive the queries that flesh out the data (a rough sketch follows below). FYI, I’m on a plane all day tomorrow, so I will be unable to comment until probably next week.
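
    One possible way to apply that to the multi-table case is to queue a small structured payload instead of a bare ID, so the Filter knows which kind of record it is fleshing out. The record types and field names here are invented for illustration; json.serialize/json.parse are used only to keep the queued payload self-describing.

        -- From Translator: queue one small payload per record to generate.
        function main()
           -- Hypothetical work list; in practice this would come from
           -- querying your fifteen or so configuration tables.
           local Work = {
              {type = 'encounter', id = '1001'},
              {type = 'result',    id = '2002'},
           }

           for i = 1, #Work do
              queue.push{data = json.serialize{data = Work[i]}}
           end
        end

        -- In the Filter, json.parse{data = Data} recovers the type and ID,
        -- and the script branches to the appropriate table queries from there.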
