
Tired of editing vmd files for table schema?

  • Then you will probably like this – it’s in Iguana 5.6.7. It’s one of the things we’re showing on the West Coast workshop tour:

    http://help.interfaceware.com/kb/introducing-database-schema-dbs-files

    Some fine work has gone into this from a couple of C++ gurus: Ram and Andrew did the work of bringing it into action, while Bret Dawson got the Translator set up so it can edit these files.

    They are a little tricky to introduce into your project – you can either start by grabbing the example from the Iguana apps repository, add a file with a .dbs extension yourself, or get in and twiddle with the other directory in your user ID sandbox.
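
    To give a rough idea of the format, a minimal .dbs file looks something like the worked example further down this thread – the table and column names here are purely for illustration:

    create table [Patient] --Patient demographics
    (
    [Id] integer,
    [LastName] string,
    [FirstName] string,
    key([Id])
    );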

    Anyways we’re looking for some feedback and happy to chat about how these things can be used.

    So, can we expect something similar for HL7 VMDs? A somewhat larger task, I expect.

    Yes … we’re kind of just dipping our toe into the custom grammar pool to see if our user community likes the idea or runs away screaming 🙂 So far there has been some positive reception to the idea from in-person discussions on the west coast tour.

    Table schemas are an easier target, and the level of pain removed is higher, since it really is an awkward workflow using Chameleon – the whole un-intuitive business of dropping tables into a message’s table grammar, and the whole PITA that comes from 64-bit Windows needing a 64-bit ODBC datasource for Iguana and a 32-bit one for Chameleon (argh!).

    Custom grammars for HL7 could have some benefits – if one could define say a base 2.3.1 HL7 grammar and then ‘require’ it then extensions to that grammar like Z segments could be quite tersely defined as extensions off that base. Being able to diff such a grammar is attractive.

    However it may daunt some users. An alternative approach would be to just go and build out a GUI web editor for vmd files, which would offer a different experience.

    This forum needs a LIKE button.

    Custom grammars for HL7 could have some benefits – if one could define say a base 2.3.1 HL7 grammar and then ‘require’ it then extensions to that grammar like Z segments could be quite tersely defined as extensions off that base. Being able to diff such a grammar is attractive.

    Big tick.

    I really look forward to being able to define message grammar on-the-fly, or at least completely within the Translator. The Translator significantly lifted the bar in integration development, and vmds just seem so 2000s. 🙂

    I have to agree with Garry. Unfortunately, I had to miss your West Coast Tour, so I didn’t get to see this in action, but frankly anything that would eliminate VMDs and the use of Chameleon gets a thumbs-up from me.

    -Robert James,
    Interface Analyst,
    GetWellNetwork, Inc.

    Funny – it was your question in the forums about meeting Southern Californian users that kind of kicked off the whole west coast tour 🙂 Next time! Anyone reading this forum can just go ahead right now with Iguana 5.6.7, install the channel manager and try out these new DBS files in your own environment.

    It’s a given that any replacement for vmds that integrates similar functionality directly into the Iguana GUI will be an improvement. There are different styles we could go for, though. We could re-create the Chameleon style of schema editing, which is heavy on a traditional GUI, or we could go down the route of a free-form grammar.

    There are pros and cons to both approaches.

    Currently the API is like this:

    function Blah()
       -- Creating a schema is somewhat expensive so typically you'd do this outside of your main function once only
       local DbSchema = dbs.tableLayoutFromFile{filename='todatabase/example.dbs'}
       local TableOut = DbSchema:getTableSet('ALL')
    end

    There is also a tableLayoutFromString function. I’d like to see this tweaked to:

    function Blah()
       -- This init function could also take an optional string argument instead of a file.
       local DbSchema = dbs.init{filename='todatabase/example.dbs'}
       local TableOut = DbSchema:tables{group='ALL'}
    end
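
    As an aside, the string variant can be handy when you want to keep the schema text right next to the script. A quick sketch – the parameter name passed to tableLayoutFromString is an assumption on my part, and the schema text uses the grammar shown later in this thread:

    function Blah()
       -- Hypothetical parameter name; check the function's help for the real spelling.
       local DbSchema = dbs.tableLayoutFromString{definition=[[
    create table [Patient] --Patient demographics
    (
    [Id] integer,
    [LastName] string,
    key([Id])
    );
    ]]}
       local TableOut = DbSchema:getTableSet('ALL')
    end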

    It’s an open question in my mind as to whether there is any value in the concept of having a group of tables. Personally I always just end up creating one group that has everything, so I don’t see much value in it. If one needed to create multiple groups of tables we could just create multiple dbs files. It usually seems a good idea to scrub away any unnecessary features and get things down to the simplest feature set that meets the needs.

    Thoughts welcome.

    Will you do something special about foreign keys?

    As in turn them purple or something? So far these database schemas have been used in a fairly simplistic manner to drive the db.merge function, which does an insert or an update depending on whether the key columns are present.

    There is the Lua implementation of db.merge which could be altered to do more sophisticated things (eeek – just thinking about the testing and maintenance). If the ideas were good and people liked them, they could go into the core repository.
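
    To make the insert-or-update behaviour concrete, here is a stripped-down conceptual sketch in plain Lua. To be clear, this is not the shipped db.merge implementation – just the decision it boils down to, with naive quoting for illustration only:

    -- Given a table name, the list of key columns and a row (column -> value),
    -- build an UPDATE when the key values are present, otherwise an INSERT.
    local function upsertSql(TableName, Keys, Row)
       local hasKeys = true
       for _, K in ipairs(Keys) do
          if Row[K] == nil then hasKeys = false end
       end
       local cols, vals = {}, {}
       for Name, Value in pairs(Row) do
          cols[#cols+1] = Name
          vals[#vals+1] = "'"..tostring(Value).."'" -- illustration only: no real escaping
       end
       if not hasKeys then
          return 'INSERT INTO '..TableName..' ('..table.concat(cols, ', ')..
                 ') VALUES ('..table.concat(vals, ', ')..')'
       end
       local sets, wheres = {}, {}
       for I, Name in ipairs(cols) do
          local IsKey = false
          for _, K in ipairs(Keys) do if K == Name then IsKey = true end end
          local Clause = Name..' = '..vals[I]
          if IsKey then wheres[#wheres+1] = Clause else sets[#sets+1] = Clause end
       end
       return 'UPDATE '..TableName..' SET '..table.concat(sets, ', ')..
              ' WHERE '..table.concat(wheres, ' AND ')
    end

    For example, upsertSql('patientdetails', {'PatientID'}, {PatientID='X1', Surname='Smith'}) produces an UPDATE, while the same row without a PatientID produces an INSERT.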

    It’s possible there may be benefit in expanding this schema information to hint at other properties like VARCHAR lengths. Tricky… since at some point you end up with a native representation of SQL for each database type… at which point, why not just use native SQL? 🙂

    Speaking of native SQL, here’s how I implemented vmd-less database updates and overcame some limitations in the built-in db.merge function.

    It relies on the native SQL MERGE statement – not available on all platforms unfortunately, but I work mostly with MSSQL and Oracle. In essence, I create a Lua table to hold all the data values, with the names of the table elements matching the db field names. I then have a function that uses the table to build an SQL MERGE statement that I can execute.

    require '_dbutil'

    function main(Data)
       local msgIn = hl7.parse({vmd="Inbound from iPM.vmd", data=Data})

       -- Each element name matches the target column name; Q() is the quoting
       -- helper from _dbutil, so every value is already valid SQL text.
       local tIn = {}
       tIn.PatientID = msgIn.PID[3][1][1]:Q()
       tIn.Surname = msgIn.PID[5][1][1]:Q()
       tIn.Firstname = msgIn.PID[5][1][2]:Q()
       tIn.dob = 'to_date(' .. msgIn.PID[7]:Q() .. ", 'YYYYMMDD')"
       tIn.LocationID = '(select id from locations where loc_name = '
          .. msgIn.PV1[3][1]:Q() .. ')'
       -- Build the native SQL MERGE statement; sSql is then ready to execute
       -- against the database connection.
       local sSql = db.BuildMergeO('patientdetails', 'PatientID', tIn)
    end

    You’ll notice that I can include SQL functions for type transformations as well as SQL subqueries. I have found this last capability enormously helpful when using relational data structures, where pointers to reference tables are stored rather than the data itself.
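
    For anyone curious, a helper along the lines of db.BuildMergeO can be reconstructed roughly like this. To be clear, this is just my sketch of the idea, not the code from the post above – it assumes every value in the row table is already valid SQL text (quoted literals, to_date(...) calls, subqueries and so on):

    -- Assemble a native SQL MERGE statement (MSSQL-style syntax; Oracle would
    -- need a FROM dual in the USING clause) from a table of column -> SQL text.
    local function buildMerge(TableName, KeyColumn, Row)
       local srcExprs, updates, insertCols, insertVals = {}, {}, {}, {}
       for Name, SqlText in pairs(Row) do
          srcExprs[#srcExprs+1]     = SqlText..' AS '..Name
          insertCols[#insertCols+1] = Name
          insertVals[#insertVals+1] = 'source.'..Name
          if Name ~= KeyColumn then
             updates[#updates+1] = 'target.'..Name..' = source.'..Name
          end
       end
       return 'MERGE INTO '..TableName..' AS target'..
              ' USING (SELECT '..table.concat(srcExprs, ', ')..') AS source'..
              ' ON (target.'..KeyColumn..' = source.'..KeyColumn..')'..
              ' WHEN MATCHED THEN UPDATE SET '..table.concat(updates, ', ')..
              ' WHEN NOT MATCHED THEN INSERT ('..table.concat(insertCols, ', ')..')'..
              ' VALUES ('..table.concat(insertVals, ', ')..');'
    end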

    Sorry if I’m getting a bit off topic.

    Yes, enforcing foreign keys would be the first step to creating a complete object-relational mapping.
    But maybe it is easier to incorporate an already existing ORM, like https://fperrad.github.io/lua-CoatPersistent/ (I don’t know if there are other Lua ORM libs, or whether it is possible to use them in Iguana).

    Robin:

    It’s probably possible to use it. It has native parts written in C, so it would require some re-jigging to get the linking redone so that it picks up the Lua runtime symbols from the Iguana.exe binary (which you’ve already played with). It looks like a reasonably recent project, but with only one contributor.

    One other gotcha with these binary Lua modules is that they are often not thread safe, which is a headache in Iguana.

    Gary:

    Yes, I have seen quite a few different variants of what people have done with the db merge module. At the time I saw them we didn’t have an easy means of making it possible for people to share the code. We’re gradually getting to the point where we have the means to get our user community connected – it’s a steady process of making it easier for people to make connections. Funnily enough, this week I managed to connect two different integration engineers with two different companies that were looking for people, so the connections are definitely growing… anyway, I digress.

    One of the challenges with the database functionality is that we have two important requirements to meet:

    1. We have to cater to a lot of people who don’t live, eat and breathe in the Translator and Iguana. We have lots of users for whom Iguana is a very small part of their role, so they don’t get deeply involved. Especially with prospective new users, we have to really try and make things easy for these people.
    2. Once we release an API we really have to not mess with it. It just causes too much pain for people upgrading. We have well over 50 customers who literally have hundreds of sites each, and the pain of re-writing existing code because we made a core API ‘better’ isn’t something they will thank us for!

    So we have to be very careful about what we put into our GA (generally available) releases. The forums and the Iguana Git repo etc. are a way for us to work more with the really technical and highly engaged people in our user community – to let them connect with others of like mind. It’s also good for visibility: if you are ever in a position where you need to demonstrate your abilities, nothing says it better than people having already seen your contributions and the quality of your code.

    Anyways, that’s a long-winded way of encouraging you to share code and ideas. If something drops out of it that would be usable for our more casual, less technical user base, we’ll be happy to push it out there – or it may be best left as a resource for those in the more highly technical and engaged group.

    We have some interesting code to do with X12 that one of our users, Austin Jones, contributed – it’s in the queue to be reviewed before we try and merge it into the core repo.

    Okay, the APIs have been changed to a less verbose format, which will be part of Iguana 5.6.8.

    You can see the change:

    https://github.com/interfaceware/iguana-web-apps/commit/f01018b27dffdf71022a422bc39551a6803aab44

    It turns out that in the tables() function, if you don’t specify a group to load then you get all the tables, which works out well for most of the database channels I have worked on.
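
    In other words, the common case collapses to something like this (the names here follow the proposal earlier in the thread – double check the commit above for the final spelling):

    local DbSchema = dbs.init{filename='todatabase/example.dbs'}
    -- No group specified: returns every table defined in the dbs file.
    local TableOut = DbSchema:tables()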

    Just thought of a really useful way to leverage DBS files to take the self-describing web service example to the next level:

    http://help.interfaceware.com/forums/topic/brain-teaser

    Curious to see if anyone is interested in trying this out as a model.

    Incidentally, if this topic is of interest for avoiding Chameleon, then this tool:

    http://help.interfaceware.com/kb/generic-z-segment-parser

    is another thing to be aware of. Since it avoids the need to parse Z segments with vmds, it reduces the amount of time one has to spend inside Chameleon editing vmd files.

    Eliot, you make it sound as though time spent in Chameleon isn’t quality time 😀

    While the 32 vs 64 bit ODBC DSN issue is a bit annoying and the need to provide a link between a message grammar and a database table seems non-intuitive, Chameleon is still much faster when it’s necessary to template lots of databases/tables.

    I very much look forward to the day that “Iguana DBStudio” is available in the Iguana GUI, and customers can create, edit and update dbs files without having to understand the grammar. And of course, functionality that would build a set of tables based on one or more dbs files would be both complementary and very useful 🙂

    Jeff Drumm ◊ VP and COO ◊ HICG, LLC. ◊ http://www.hicgrp.com

    I think the people on whom Chameleon grates the most are those who are not running Windows on their machines – a lot of companies are using MacBooks these days.

    I guess a useful mix for DBS would be one where you can swap between a GUI editor view and the underlying grammar format…

    I’m starting a new project (fairly small) and I’m going to try to use this method for creating/maintaining my tables. This may be a really stupid question, but I cannot figure out how to flag fields as key fields. I don’t see any keys in the example file, and I’ve tried about 50 ways of doing it but I get a parser error every time. Am I just blind and missing it in the example, or is there some voodoo for keys which I have not yet discovered?

    M.R. McBee

    Hi Ryan,

    Oops – the example did omit that, didn’t it. Sorry. I did a bit of experimentation and looked at the grammar file for DBS, and this works:

    create table [Bill] --Billing Information
    (
    [id] integer,
    [Charge_Type] string,
    [AccountID] string,
    key(id)
    );

    See the key definition at the end?

    I checked with the isKey() method on the resulting schema object and that makes it a key.
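
    If you want to sanity-check it from a script, something along these lines should work – although how the columns are reached on the schema/table-set objects below is an assumption on my part, so treat it as a sketch:

    -- Load the schema and inspect the key flag on the columns of [Bill].
    -- Indexing the table set by name and iterating the columns with ipairs
    -- is assumed here and may differ from the real object layout.
    local DbSchema = dbs.tableLayoutFromFile{filename='todatabase/example.dbs'}
    local Tables   = DbSchema:getTableSet('ALL')
    for _, Column in ipairs(Tables.Bill) do
       trace(Column:isKey())  -- true for [id] given the key(id) definition above
    end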

    Thanks Eliot

    M.R. McBee

    Additionally, a multi-column key might be specified as follows:

    create table [Bill] --Billing Information
    (
    [id] integer,
    [Charge_Type] string,
    [AccountID] string,
    key([id], [Charge_Type], [AccountID])
    );
