Applying the Translator to Odd Formats

Mirroring a Fogbugz Wiki

This is a very interesting application of the Translator that falls completely outside of the traditional realm of interfacing.

This wiki is implemented using a commercial product called FogBugz, an excellent bug-tracking system that has a nice wiki built into it. However, we use Iguana to programmatically extract the data from that wiki and reformat it in the manner you see now.

There is a lot to love about the FogBugz wiki as a documentation solution. It has a very intuitive interface, and it is simple to jump in and be productive with it. Unlike most wikis, the FogBugz wiki has a great concept of a page hierarchy, which allows us to structure the documentation in a similar fashion to our old manual. Page editing is WYSIWYG; it feels just like using Word in the context of a browser.

But there were things that were not ideal compared to what our old manual technology already offered, such as:

  1. Serving pages quickly, because all the final generated output was static HTML. We have had reports from customers in some parts of the world, like India and Singapore, that the FogBugz wiki had quite slow performance in their areas.
  2. It's hard to proxy the FogBugz wiki to our own internet domain name; using our own URL required a few round trips to redirect to the FogBugz URL.
  3. Hard-coded links crept into the wiki and were difficult to find and remove.
  4. Logging in and editing the wiki while you were looking at a page had an awkward workflow.
  5. We wanted back the tree navigation on the left-hand side of the page; the FogBugz wiki:
    • Only showed some of the related page siblings
    • Always listed sibling pages in alphabetical order rather than giving us control over the order; this was a small but very significant usability issue.
  6. The FogBugz wiki didn't give us next and previous buttons; these are very useful for customers going through tutorials.
  7. It's hard to get precise control over the complete look and feel of the wiki. We did our best, but the FogBugz skinning system limited what we could do.

Overall we like FogBugz a lot better than our original home-grown documentation system. But the above issues were significant for our customers; having good, readily available documentation is a big differentiator for us, so these issues were very important to fix.

So we decided to use Iguana to fill the gap, and it has turned out to be a beautiful solution. It is very easy to use Iguana to scrape the FogBugz wiki, extract our document data, and render a static HTML version of the site with complete control over all the formatting.

What you are reading right now is content generated by Iguana by programmatically extracting data from the FogBugz wiki. The attached image (Original Format.png) shows what the original format looked like.

We are curious to see how much interest there is in this, so let us know.

Source Code

Dmitri kindly made this code available. Obviously this is a starting point to play with, and it will require customization to fit your own environment. If you are interested in using the code and discussing it, our LinkedIn forum group is a great place to do that; please contact our support if you would like to become part of that group. The solution is made up of several channels:

  • One processes messages by fetching pages from the source FogBugz instance, parsing the pages, computing the hierarchy and generating the output HTML.
  • One periodically (on a schedule) produces messages instructing GenerateHandler to regenerate the data, and sometimes to completely re-transfer pages.
  • One caches data from recently changed pages and generates the output pages.
  • One handles an HTTP request indicating that a particular page should be updated immediately (called by the URLTrigger plugin from FogBugz whenever a user edits and saves a page).
  • One handles the "Edit" link redirect, protecting it with a password, to keep the original instance from being visited by search engines and the like.

Dmitri would also like to make some site-specific customized static "skinning" files to polish off the appearance, animate the tree menu and so on; a generic version might come later.

How it works

GenerateHandler contains the core caching logic: it accesses the FogBugz instance via the HTTP/XML API, or by fetching the pages like a normal browser, parses the pages, works out the hierarchy tree, and finally generates and writes the output HTML files.
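The steps above can be sketched roughly as follows. Note that the real implementation is a Lua script running inside Iguana; this Python sketch, with fabricated page data and illustrative helper names, only shows the shape of the fetch-parse-hierarchy-render pipeline.

```python
# A rough sketch of the GenerateHandler pipeline: given parsed pages,
# work out the hierarchy tree, then render one static HTML body per page.
# All names and data here are illustrative assumptions, not the real code.

def build_tree(pages):
    """Map each parent page id to the list of its child page ids."""
    children = {}
    for pid, page in pages.items():
        children.setdefault(page["parent"], []).append(pid)
    return children

def render(pid, pages, children):
    """Generate a static HTML body for one page, with links to its children."""
    page = pages[pid]
    links = "".join('<li><a href="%d.html">%s</a></li>' % (c, pages[c]["title"])
                    for c in children.get(pid, []))
    return "<h1>%s</h1>%s<ul>%s</ul>" % (page["title"], page["body"], links)

# Pages as they might come back, already parsed, from the FogBugz
# HTTP/XML API (fabricated example data).
pages = {
    1: {"title": "Manual", "body": "<p>Welcome.</p>", "parent": None},
    2: {"title": "Install", "body": "<p>Steps.</p>", "parent": 1},
}
children = build_tree(pages)
html = {pid: render(pid, pages, children) for pid in pages}
```

In the real channel, each rendered body would then be wrapped in the site template and written out as a static HTML file.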

It gets messages (effectively commands) from several other channels, which tell it what to do, for example:

  • Re-transfer a particular page:
    • FetchPageId=123 -- re-fetch page 123 and regenerate
  • Re-transfer all pages:
    • FetchPageId=*
  • Regenerate using data already fetched and stored locally, if available (useful for testing generation):
    • FetchPageId=#
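Dispatching these command messages amounts to a small amount of string handling. The following Python sketch (the real script is Lua inside Iguana, and the function name is an illustrative assumption) shows one way the three FetchPageId forms could be interpreted:

```python
# A minimal sketch of dispatching the FetchPageId command messages
# described above. The dispatch() helper is a hypothetical name.

def dispatch(message):
    """Interpret a FetchPageId command message and describe the action taken."""
    key, _, value = message.partition("=")
    if key != "FetchPageId":
        raise ValueError("unrecognised command: %r" % message)
    if value == "*":
        return "re-fetch all pages and regenerate"
    if value == "#":
        return "regenerate from locally cached data"
    return "re-fetch page %d and regenerate" % int(value)

print(dispatch("FetchPageId=123"))  # re-fetch page 123 and regenerate
```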
