Thursday, September 18, 2008

Retriggerable First Call

Over the years, I've grown to like the First Call? primitive on the Synchronization palette. It's a simple, useful tool when you want to run a subVI, or a section of a block diagram in a case structure, only once when the VI first starts up. I find it particularly useful for initializing the behavior of modules that I've implemented as functional globals. Recently, though, I found that several of the modules and applications I've been working on needed the ability to do what I call a "soft restart." That is, they need to be returned almost completely to their start-up conditions without being stopped and restarted entirely. This is particularly true of my RT applications, which, lacking a GUI, can't easily be stopped and restarted directly. Herein lies the quandary: First Call? returns TRUE only once, but to perform a soft restart we need the ability to reset it, on demand, so that it returns TRUE again. The solution: Bob's handy-dandy Retriggerable First Call routines.

First Call Retriggerable.vi

Used in a manner similar to the First Call? primitive, this reentrant VI returns TRUE the first time it is called. It also obtains a reference to a named Notifier (and keeps said reference on an uninitialized shift register), which is used to accept the retriggering notification. An accessory VI, Retrigger First Call Nodes, when called, causes all instances of this VI to once again return TRUE the first time they are called after the retrigger. In this manner, we can use this VI for initialization functions while retaining the ability to restart processes when necessary.
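LabVIEW diagrams don't translate directly to text, but the mechanism is easy to sketch. Here's a rough Python analogue, assuming a shared generation counter can stand in for the named Notifier; all of the names here are hypothetical, and each FirstCall instance's stored generation plays the role of the uninitialized shift register:

```python
import threading

# Shared "notifier": a global generation counter guarded by a lock.
# Retriggering bumps the counter, which makes every FirstCall instance
# report True again the next time it is called.
_lock = threading.Lock()
_current_gen = 1

class FirstCall:
    """Rough analogue of one reentrant First Call Retriggerable.vi instance.

    Returns True the first time it is called after the most recent
    retrigger, and False on every call thereafter.
    """
    def __init__(self):
        self._seen_gen = 0   # plays the role of the uninitialized shift register

    def __call__(self) -> bool:
        with _lock:
            if self._seen_gen < _current_gen:
                self._seen_gen = _current_gen
                return True
            return False

def retrigger_first_call_nodes() -> None:
    """Rough analogue of Retrigger First Call Nodes.vi: notify all instances."""
    global _current_gen
    with _lock:
        _current_gen += 1
```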

Retrigger First Call Nodes.vi

Pretty straightforward: obtain the Notifier reference on the first call, keep it on a shift register, and send a notification every time the VI is called. Unless, of course, an error is passed in, in which case it does nothing.
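In terms of the Python sketch above, the retrigger behaves like this:

```python
init_done = FirstCall()          # one instance per initialization site
print(init_done())               # True:  run the initialization code
print(init_done())               # False: skip it on later calls
retrigger_first_call_nodes()     # a soft restart has been requested
print(init_done())               # True:  initialization runs once more
```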

For completeness and cleanliness, I suppose there should also be a third component to this utility: a clean-up routine to destroy the notifier reference. Also, looking a little more critically at Retrigger First Call Nodes.vi, we could make the diagram a tad simpler: get rid of the error-handling case structure entirely and instead wire the error in and error out clusters straight through the Send Notification primitive, which already does nothing when an error is passed in. As my professors in college used to say, "I will leave this as an exercise for the interested reader."

Monday, September 8, 2008

Put Error Handling in Every SubVI

Picture yourself in this all too common scenario. You've been developing a glorious application for weeks or months, and you're near the end. It's your masterpiece, a real showstopper, your veritable magnum opus of creativity and cleverness. You've gotten to the detailed testing stage, and you're confident that everything will come together smoothly. The application will run flawlessly and you'll be a hero in the eyes of your colleagues and customers. Under budget, ahead of schedule, with a sexy GUI and clean block diagram. Thanks to LabVIEW, you've been able to test each and every one of your functional modules as you've developed them, and stand-alone, each one is bug free. The sun is shining, the flowers are blooming, and life is great.

Then you start running the integrated application. Suddenly, things don't work so well. Routines that you thought were completely debugged are throwing errors you've never seen before. Or worse... nothing is causing an error, but your test inputs are not producing the correct results. Things are behaving in a weird and unpredictable manner. Testing is going poorly and taking far more time than you had budgeted. Your customer is demanding to know when you'll be finished and your answers are growing vague. You can see your schedule leeway rapidly evaporating and you're losing confidence in your ability to deliver. You haven't seen your wife and kids in days, the dark clouds are closing in around you, and life sucks.

How many of us have faced this looming disaster with fear and trepidation? What could you have done to reduce the anxiety and make testing at least a little bit more predictable? I won't suggest that there is one single, silver-bullet solution that will magically convert your software dung beetles into amethyst scarabs. However, there is one really simple discipline that will make your job of isolating bugs far simpler: put error handling into every single subVI that you write.

Does that sound too simple? It's not too simple; just simple enough to be easy and very useful. Let's first review the most basic way to approach it.

The most basic subVI error handling consists of a case structure enclosing all of the functional code in each module, with the input error cluster wired to the selector terminal. The error case executes nothing; it merely passes the error cluster through to the output. In the no-error case, where your actual code resides, you wire the error cluster through your code wherever possible, picking up all the nodes that handle errors (familiar examples include file I/O and synchronization functions). In this manner, if the VI kicks up an error, it passes it out to the next VI in line, and for any VI, an input error inhibits any further processing.

The most obvious result is that the source of any error produced will be clear in any chain of subVIs. Put a probe on the output error clusters in a successive chain, run your code, and magically, the source of the error becomes painfully apparent. Errors can be isolated easily and unambiguously. For this reason alone, the approach is worth its weight in gold.
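The error cluster has no direct equivalent in text languages, but the pattern itself is easy to sketch. Below is a rough Python analogue of the idea; Error, read_sensor, and scale_value are hypothetical names, with each "subVI" modeled as a function that takes an error in and returns an error out:

```python
from dataclasses import dataclass

@dataclass
class Error:
    status: bool = False   # True once an error has occurred
    code: int = 0
    source: str = ""

def read_sensor(error_in: Error) -> tuple[float, Error]:
    # The "error" case of the enclosing case structure: do no work,
    # just pass the incoming error through to the output.
    if error_in.status:
        return 0.0, error_in
    try:
        value = 42.0       # real acquisition code would live here
        return value, error_in
    except Exception as exc:
        return 0.0, Error(True, -1, f"read_sensor: {exc}")

def scale_value(x: float, error_in: Error) -> tuple[float, Error]:
    if error_in.status:
        return 0.0, error_in
    return x * 2.0, error_in

# Chaining the error output of each stage into the next enforces execution
# order and makes the first failing stage immediately obvious:
err = Error()
raw, err = read_sensor(err)
scaled, err = scale_value(raw, err)
if err.status:
    print(f"error {err.code} from {err.source}")
```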

With error clusters in and out of every subVI, you can enforce dataflow dependency that might otherwise be difficult or impossible to achieve, which means you control the order in which the steps of your program execute. The results? You know exactly what executes when. You dramatically reduce the possibility of timing ambiguities and race conditions. You eliminate the need for artificial ways of forcing execution order, such as sequence structures. And you can probe or breakpoint all intermediate values, step by step, from one VI to the next.

As a useful side effect, putting error in and error out clusters on all of your subVIs also helps to standardize the icon/connector panes of your work. Many authors have advocated picking a single connector pattern and using it on all of your work; popular choices include the 4-2-2-4 and 5-3-3-5 terminal layouts.

There are certainly more sophisticated approaches to error handling than what I've presented here; Peter Blume, for example, devotes an entire chapter to the subject in his book. The point of this article is simply this: start doing it. You'll see the benefits immediately. First get into the habit of including at least the most basic error handling in your subVIs; then you can start to get fancy.

The scenario described in my opening paragraphs is one that I've either witnessed or lived through too many times. This approach is so easy to do, and such a powerful tool to help make your code more robust and easier to debug, that there's no excuse not to do it.

Tuesday, August 5, 2008

U64 Millisecond Tick Count Utility

We all know and love the built-in tick count (mSec) function in LabVIEW. It unfortunately has two intrinsic limitations which, at one point or another, we've all probably encountered. The first is that its output is a U32, which means that it wraps after 2^32 - 1 mSec, or roughly every 49.7 days. The second difficulty is that we don't know when the wrap will happen; it could be tomorrow, or seven weeks from today. The value of the millisecond tick doesn't seem to be tied to any external temporal reference. Well, I've got a solution that at least tackles the first issue... it's the U64 Tick Count.vi shown below.

[U64 Tick Count.vi block diagram]

Its operation is pretty simple and self-explanatory. It's a functional global, in the sense that it uses uninitialized shift registers to maintain state data between calls. When the U32 time function wraps around, 2^32 is added to a running offset, and the VI properly initializes its state on the first call. Since I use this in both Windows and RT applications, there's a Conditional Disable structure to select the appropriate mSec tick function for each. (I'd like to post the VI itself for download, but I haven't yet figured out how to do this on Blogspot. If any of you intrepid readers could fill me in, I'd be grateful.)
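For readers who want the gist without the diagram, here's a minimal Python sketch of the same wrap-handling logic, assuming a hypothetical 32-bit millisecond tick source called get_tick_u32:

```python
import threading
import time

_WRAP = 2**32
_lock = threading.Lock()    # the VI is non-reentrant, so calls serialize
_offset = 0                 # accumulated full wraps, in mSec
_last_tick = None           # plays the role of the uninitialized shift register

def get_tick_u32() -> int:
    # Hypothetical stand-in for the platform-specific tick function that
    # the VI selects with its Conditional Disable structure.
    return int(time.monotonic() * 1000) % _WRAP

def tick_count_u64() -> int:
    """Return a monotonically increasing 64-bit millisecond tick."""
    global _offset, _last_tick
    with _lock:
        tick = get_tick_u32()
        if _last_tick is None:      # first call: initialize the state
            _last_tick = tick
        elif tick < _last_tick:     # the 32-bit counter rolled over
            _offset += _WRAP
        _last_tick = tick
        return _offset + tick
```

One caveat, true of the VI as well: the function must be called at least once per rollover period (49.7 days), or a wrap will slip by undetected.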

With a U64 output, this millisecond timer will wrap approximately every 584 million years. I would guess that this should be sufficient for most applications.

Thursday, July 10, 2008

The Importance of Style in LabVIEW Programming

Earlier this week, my wife, son, and I visited the teaching greenhouse at UConn to see a specimen of a Sumatran Corpse Flower (Amorphophallus titanum) in bloom. Relatively rare in captivity, these plants are the rock stars of the horticultural world, producing a single six-foot-tall bloom that lasts less than a day. They also smell like a putrid rotting cadaver, which adds to the fascination. What does this all have to do with LabVIEW, you ask? Read on and you'll see.

Over the years, much has been written and discussed regarding the importance of good programming style. Several years ago, the (now defunct) LTR newsletter ran an excellent article entitled "Rules to Wire By"; it's still relevant, and you can read it here. An authoritative book on the subject, The LabVIEW Style Book, was published last year by my former boss, Peter Blume. It's an exhaustive compilation of rules, recommendations, examples, and illustrations of good and bad LabVIEW style; it covers a lot of ground and can serve as a great reference for those seeking to perfect this aspect of their LabVIEW programming skills. The book grew out of a presentation Peter gave at NIWeek 2002, which you can download here. The terrific books by Gary Johnson, Jeff Travis, Jim Kring, Jon Conway, and others further strengthen the references available to experienced programmers and neophytes alike. With these and the myriad other resources available, you'd think there would be no excuse for some of the poor code we've all seen in our careers.

The trouble is, style is not enough.

Recently, I witnessed a project that went from bad to worse. The original programmer produced an application that didn't look very good and functioned poorly. The project was re-assigned to another engineer in the company, a CLA at that. She refactored the code according to the best style standards available, but didn't pay enough attention to the underlying program structure, and as a result its performance didn't meet the customer's requirements. The project was re-assigned yet again, to yet another CLA, and still nothing really improved. The code got prettier and prettier with each iteration, but no one addressed the fundamental design issues that prevented the program from meeting its goals.

LabVIEW is an easy language to learn and an easy one in which to start programming. It's also the easiest language to program poorly; unlike in text-based languages, your bad code will still run, and may even run fairly well. LabVIEW seems to cast a spell over even experienced programmers of other languages, who forget everything they used to know. People jump right into coding without giving any thought to program design. The old standard concepts of structured programming, hierarchical organization, abstraction, modularity, loose coupling and tight cohesion... gone in a flash. By paying attention to style, poorly written code can be made to look marvelous. Personally, I'd much rather deal with a well-structured application that happens to have crooked wires and overlapping block diagram objects than a lovely piece of code written without attention to the disciplines mentioned above.

A LabVIEW program that's created using correct style but without appropriate design is like a Corpse Flower: it's beautiful and looks very impressive, but it still stinks.

Monday, July 7, 2008

Insidious Memory Leaks, Part 1

Memory leaks are the bane of any software application that must run continuously for long periods of time. The common causes -- building arrays or concatenating strings within loops -- are relatively easy for most developers to spot. However, leaks can sometimes creep into applications in seemingly harmless ways. Consider the following code, developed by a local integrator, which was extracted from an instrument polling loop that runs in parallel to a main application (the pattern is sketched below).

It looked innocuous enough to me... Sure, why not use a notifier to stop the loop when the main application is stopped? It seemed to run without problems under Windows. However, when it was downloaded to a PXI-RT target, it leaked like a screen door on a submarine. The fix was simple enough: put the Obtain Notifier outside of the loop and pass the refnum in through a tunnel. You would think that Obtain Notifier would simply return a pointer to the existing notifier, but apparently things work differently enough in RT to cause headaches. I haven't tried it yet, but I bet the same (mis)behavior is evident when obtaining refnums from other synchronization tools.
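Since the original snippet was a block diagram image, here's a rough Python rendering of both the leaky pattern and the fix; obtain_notifier, poll_instrument, and the rest are hypothetical stand-ins, not real APIs:

```python
import threading
import time

# Hypothetical stand-ins for the notifier API, just to make the sketch
# self-contained; none of these names are real LabVIEW calls.
_notifiers = {}

class _Notifier:
    def __init__(self):
        self._event = threading.Event()
    def wait(self, timeout_ms):
        # Returns True once notification has been sent (i.e., time to stop).
        return self._event.wait(timeout_ms / 1000.0)
    def send(self):
        self._event.set()

def obtain_notifier(name):
    # In this Python stub the same object comes back every time, so nothing
    # actually leaks here; on the RT target, though, each Obtain Notifier
    # call apparently allocated resources that were never released.
    if name not in _notifiers:
        _notifiers[name] = _Notifier()
    return _notifiers[name]

def poll_instrument():
    time.sleep(0.01)   # stand-in for the real instrument I/O

def polling_loop_leaky():
    # The pattern as written: the reference is re-obtained on every pass.
    stop = False
    while not stop:
        notifier = obtain_notifier("stop polling")  # inside the loop: leaks on RT
        stop = notifier.wait(timeout_ms=100)
        poll_instrument()

def polling_loop_fixed():
    # The fix: obtain the reference once, outside the loop, and reuse it.
    notifier = obtain_notifier("stop polling")
    stop = False
    while not stop:
        stop = notifier.wait(timeout_ms=100)
        poll_instrument()
```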

The moral of the story is a simple one you see over and over again: if a code element doesn't need to be in a loop, then don't put it there. Duh.