Wednesday, August 10, 2011

Fun With Subpanels, Part 1

Introduced in LabVIEW 7.0, the Subpanel is a container used to display the front panel of a subVI within the front panel of a main VI, allowing users to interact with the subVI's front panel controls within the bounds of the main VI. The most typical use case that I've encountered for a Subpanel is in conjunction with a VI that runs in parallel with the main application via VI Server. In this common scenario, a reference to the VI is opened, the VI is set running using an Invoke Node, and its reference is inserted into the Subpanel. Subpanels are a cool and flexible way to encapsulate modular, reusable functionality, to access controls in a daemon, and to implement plug-in architectures.


The following code snippet shows a neat trick which you can implement using a Subpanel: a universal subVI front-panel viewer. This utility opens a reference to the target VI specified by the Path control and, in turn, opens references to all dependent subVIs within the target's hierarchy. A ring control receives the names of the subVIs, allowing the user to select the desired subVI to view. As long as the subVI is not already in memory (because its front panel was previously opened, for example), this is a neat way to view the controls and indicators of any subVI within the main application.

In the next installment, we'll look at another way to creatively and profitably use Subpanels, in what I call "the poor man's XControl."


Wednesday, December 15, 2010

How to Compare Something With Nothing

Nothin' from nothin' leaves nothin'
You gotta have somethin' if you wanna be with me
- Billy Preston, #1 hit, 1974


No, this hasn't suddenly become a blog about mid-70's R&B. As I considered the subject matter of this article, I just couldn't get this tune out of my head. Regardless, if you're not familiar with Mr. Preston's music, you might want to check it out. In addition to his successful, Grammy Award-winning career as a solo artist, Preston collaborated with some of the greatest names in the music industry, including The Beatles, The Rolling Stones, Ray Charles, Joe Cocker, Elton John, Eric Clapton, Bob Dylan, Aretha Franklin, Sly Stone, Johnny Cash, Neil Diamond, and the Red Hot Chili Peppers. Billy died in 2006.

And now back to our regularly scheduled programming. One of the many cool things which I love about LabVIEW is the ability of most of its primitives to be polymorphic. Similar to the general meaning in Computer Science, polymorphism is a programming language feature that allows values of different data types to be handled using a uniform interface. For example, the comparison palette is almost completely generic; you can use the same equality or inequality primitives for integers, floats, strings, enums, or arbitrarily complex compound structures (e.g. clusters) comprising all of the aforementioned. Particularly handy is LabVIEW's capacity to be polymorphic with arrays, which in many cases eliminates the need for looping. However, with this convenience comes behavior which may or may not suit your needs.

Consider the following code:
Notice that the two arrays are of different lengths. In this case, LabVIEW generates an output array whose length matches the shorter of the two inputs. Functionally, this is equivalent to the next code snippet, which uses explicit looping:

In the case of more than one array wired to the frame of a looping structure, auto-indexing works in a similar manner: the shortest array wired to the frame dictates the number of iterations. Maybe this is what you want. Then again, maybe not.
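Since LabVIEW diagrams can't be shown here as text, a rough Python analogy may help (an analogy only, not LabVIEW code): Python's zip() shares this shortest-input behavior, silently dropping the extra elements of the longer array.

```python
# Python analogy for LabVIEW's polymorphic comparison / auto-indexing:
# zip() iterates only over the common length of its inputs.
a = [1, 2, 3, 4, 5]
b = [1, 2, 0]

result = [x == y for x, y in zip(a, b)]
print(result)  # [True, True, False] -- only 3 elements; extras from 'a' are dropped
```

Just as in LabVIEW, whether this silent truncation is a convenience or a bug depends entirely on what you wanted.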

I recently encountered a case where a GUI contained an array of text rings, used to let the user select, on the fly, which named tag values would be displayed in a corresponding array of doubles. It was a straightforward name-value pair display, with the twist that the user could dynamically change which items appeared where in the list. This feature needed to be serviced whenever a selector changed, and change detection was accomplished in the classic manner, using a shift register to track previous values. However, the basic inequality primitive could not correctly handle the case where the user made the list longer. What was needed was the ability to compare something against nothing. Here was my solution:


Not exactly rocket science. Perform the basic comparison, and at the same time compute the difference in length between the two inputs. Decide what default behavior you want when comparing a defined value against an undefined one, construct an array of that length filled with those values, and concatenate it onto the comparison result.
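The same pad-and-concatenate idea can be sketched in Python (a sketch only; the helper name and the choice to treat extra elements as "changed" are my assumptions, not part of the original VI):

```python
def compare_with_padding(new, old):
    """Element-wise inequality that doesn't drop extra elements.

    Hypothetical helper illustrating the pad-and-concatenate approach:
    compare over the common length, then append a default result (here,
    True = "changed") for each element that has no counterpart.
    """
    # Basic comparison over the common length (what LabVIEW does natively)
    common = [x != y for x, y in zip(new, old)]
    # Default behavior for "something vs. nothing": count it as changed
    extra = [True] * (max(len(new), len(old)) - len(common))
    return common + extra

print(compare_with_padding([1, 2, 3], [1, 2]))  # [False, False, True]
```

With this, growing the list by one element correctly registers as a change, instead of the new element being silently ignored.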

If there has to be one, I guess that the moral of the story is that sometimes these wicked cool oh-so-smart features built into LabVIEW do exactly the opposite of what you want them to. There ain't no such thing as a free lunch.

Sunday, February 8, 2009

Things That Have Bitten Me Lately, Part 1

Despite having been an active LabVIEW developer for the past ten years, sometimes I still fall prey to beginner's mistakes. Well, when I realize what the problem is, they feel like beginner's mistakes... sometimes, though, the root causes can be a little bit obscure. Today, we're going to discuss For Loops. More specifically, what can happen when a For Loop does not execute.

OK, so here's the scenario. I wrote a pretty simple subVI intended to write multiple key values to a config file. Nothing more than the Write Key.vi in a For Loop, fed by arrays of key names and values. Worked like a charm. Then I strung a bunch of these together, and noticed that some of my downstream groups of keys weren't being written. I stuck a bunch of probes along the chain of error clusters and noticed the ubiquitous -- and often confusing -- error 1 being produced. Then I similarly probed the file refnum along the chain, and lo and behold, one of the VIs was outputting an invalid refnum. Take a look at the diagram below and see if you can figure out the bug.
Here's what was going on: the offending VI was being fed a value array of length zero. Thanks to auto-indexing, the For Loop wasn't executing. As a result, the (valid) input refnum wasn't being passed to the output. The rule of thumb with tunnels on For Loops is that if the loop doesn't execute, the tunnel takes the default value for the given datatype. With refnums, that means something invalid. Oops.

The solution is really, really simple: replace the tunnel with a shift register. The shift register acts as a pointer to the input wire's value, so that even if the For Loop doesn't execute, the output wire will still point to the correct value. Problem solved. For the skeptical among you, try the code shown below.
We tend to think of using shift registers when we know we'll be changing a value from within a loop. In the case shown above, you'd naturally think, "well, the value of the refnum is static, so a tunnel in and out will be sufficient." And it will be fine, as long as the loop runs at least once. So here's the new rule for For Loops: if there's a possibility that a loop won't execute, make sure that everything that needs a valid value gets passed through a shift register. Alternatively, you could just wire the refnum around the loop, but that ends up looking clumsy and ugly, and the style police will jump all over you.
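For readers more at home in text-based languages, here's a rough Python analogy (not LabVIEW code; `write_key` and the refnum value are hypothetical stand-ins): a tunnel behaves like a variable assigned only inside the loop body, while a shift register behaves like a variable seeded with the incoming value beforehand.

```python
def write_key(refnum, key, value):
    # Hypothetical stand-in for Write Key.vi: pretend to write the key,
    # then pass the (still valid) file refnum through.
    return refnum

refnum_in = 42            # stand-in for a valid config-file refnum
keys, values = [], []     # zero-length arrays: the loop runs zero times

# Tunnel-style: the output is assigned only inside the loop body, so a
# zero-iteration loop leaves it at the datatype's default (invalid refnum).
tunnel_out = None
for k, v in zip(keys, values):
    tunnel_out = write_key(refnum_in, k, v)

# Shift-register-style: seed the register with the incoming value, so the
# output stays valid even when the loop body never runs.
sr_out = refnum_in
for k, v in zip(keys, values):
    sr_out = write_key(sr_out, k, v)

print(tunnel_out)   # None -- the "invalid refnum" bug
print(sr_out)       # 42 -- valid refnum passed through
```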

Thursday, September 18, 2008

Retriggerable First Call

Over the years, I've grown to like the First Call? primitive on the Synchronization palette. It's a simple, useful tool if you want to run a subVI, or a section of a block diagram in a case structure, only once when the VI first starts up. I find it particularly useful for initializing the behavior of modules that I've implemented as functional globals. Recently, though, I found that several of the modules and applications I've been working on needed the ability to do what I call a "soft restart." That is, they need to be able to be returned almost completely to their start-up conditions, without completely stopping and restarting the applications. This is particularly true for my RT applications, which, lacking a GUI, can't easily and directly be stopped and restarted. Herein lies the quandary: First Call? returns true only once, but to perform a soft restart we need the ability to effectively reset it so that it returns true again, on demand. The solution: Bob's handy-dandy Retriggerable First Call routines.

First Call Retriggerable.vi

Used in a manner similar to the First Call? primitive, this reentrant VI will return TRUE the first time that it is called. It also obtains a reference to a named Notifier (keeping said reference on an uninitialized shift register), which is used to accept the retriggering notification. There is an accessory VI, called Retrigger First Call Nodes, which, when called, will cause all instances of this VI to again return TRUE the first time they are called after the Retrigger VI is invoked. In this manner, we can use this VI for initialization functions while retaining the ability to restart processes if necessary.

Retrigger First Call Nodes.vi


Pretty straightforward: obtain the Notifier reference on the first call, keep it on a shift register, and send a notification every time the VI is called. Unless, of course, an error is passed in, in which case do nothing.
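The overall pattern translates readily to text-based languages. Here's a minimal Python sketch (my own analogy, not a translation of the VIs): a module-level generation counter stands in for the named Notifier, and each node instance keeps its own "uninitialized shift register" recording the last generation it saw.

```python
# Shared retrigger state, analogous to the named Notifier: a generation
# counter that all first-call nodes watch.
_generation = 0

class FirstCallNode:
    """Sketch of one reentrant instance of First Call Retriggerable.vi."""
    def __init__(self):
        self._seen = None   # "uninitialized shift register": no generation seen yet

    def __call__(self):
        # TRUE the first time this instance runs after a (re)trigger
        first = self._seen != _generation
        self._seen = _generation
        return first

def retrigger_first_call_nodes():
    """Analog of Retrigger First Call Nodes.vi: after this, every node
    instance returns True again on its next call."""
    global _generation
    _generation += 1

node = FirstCallNode()
print(node())                   # True  -- first call
print(node())                   # False -- already initialized
retrigger_first_call_nodes()
print(node())                   # True  -- soft restart took effect
```

The design choice mirrors the VI: resetting is a single broadcast operation, and each instance lazily notices the reset the next time it happens to run.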

For completeness and cleanliness, I suppose there should also be a third component to this utility: a clean-up routine to destroy the Notifier reference. Also, looking a little more critically at Retrigger First Call Nodes.vi, we could make the diagram a tad simpler: what we really want is to get rid of the error-handling case structure and instead wire the error in and out clusters through the Send Notification. As my professors in college used to say, "I will leave this as an exercise for the interested reader."

Monday, September 8, 2008

Put Error Handling in Every SubVI

Picture yourself in this all too common scenario. You've been developing a glorious application for weeks or months, and you're near the end. It's your masterpiece, a real showstopper, your veritable magnum opus of creativity and cleverness. You've gotten to the detailed testing stage, and you're confident that everything will come together smoothly. The application will run flawlessly and you'll be a hero in the eyes of your colleagues and customers. Under budget, ahead of schedule, with a sexy GUI and clean block diagram. Thanks to LabVIEW, you've been able to test each and every one of your functional modules as you've developed them, and stand-alone, each one is bug free. The sun is shining, the flowers are blooming, and life is great.

Then you start running the integrated application. Suddenly, things don't work so well. Routines that you thought were completely debugged are throwing errors you've never seen before. Or worse... nothing is causing an error, but your test inputs are not producing the correct results. Things are behaving in a weird and unpredictable manner. Testing is going poorly and taking far more time than you had budgeted. Your customer is demanding to know when you'll be finished and your answers are growing vague. You can see your schedule leeway rapidly evaporating and you're losing confidence in your ability to deliver. You haven't seen your wife and kids in days, the dark clouds are closing in around you, and life sucks.

How many of us have faced this looming disaster with fear and trepidation? What could you have done to reduce the anxiety and make testing at least a little bit more predictable? I won't suggest that there is one single, silver-bullet solution that will magically convert your software dung beetles into amethyst scarabs. However, there is one really simple discipline that will make your job of isolating bugs far simpler: put error handling into every single subVI that you write.

Sounds too simple? Not too simple; just simple enough to be easy and very useful. Let's review first the easiest way to approach this.

The most basic subVI error handling consists of a case structure enclosing all of the functional code in each module, with the input error cluster wired to the selector terminal. The error case executes nothing, merely passing the error cluster through to the output. In the no-error case, where your actual code resides, you wire the error cluster through your code wherever possible, picking up all nodes that handle errors (familiar examples include file I/O and synchronization functions). In this manner, if the VI kicks an error, it passes it out to the next VI in line. For any VI, an input error inhibits any further processing.
The most obvious result is that the source of any error will be clear in any chain of subVIs. Put a probe on the output error cluster of each subVI in a chain, run your code, and magically, the source of the error becomes painfully apparent. Errors can be isolated easily and unambiguously. If for no other reason, this simple approach is worth its weight in gold.
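The idiom isn't unique to LabVIEW; here's the same discipline sketched in Python (my analogy: the error cluster becomes an error value threaded through every call, and `sub_vi` is a hypothetical module):

```python
def sub_vi(data, error_in):
    """Hypothetical subVI: if an error comes in, do no work and pass it
    through unchanged; otherwise run the actual code and report any
    new error on the way out."""
    if error_in is not None:          # the "error case": execute nothing
        return None, error_in
    try:                              # the "no-error case": the real code
        return 100 / data, None
    except ZeroDivisionError as exc:
        return None, f"error in sub_vi: {exc}"

# Chain several calls; the first failure propagates untouched, so probing
# the error value between links pinpoints exactly where it originated.
result, err = sub_vi(4, None)     # runs fine
result, err = sub_vi(0, err)      # produces the error
result, err = sub_vi(2, err)      # skipped: error passed straight through
print(err)
```

Because every downstream call refuses to run and simply forwards the error, the message you finally read still names the true culprit, not a cascade of secondary failures.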

With error clusters in and out of every subVI, you can enforce data flow dependency that might otherwise be difficult or impossible. With this, you can ensure the steps through which your program flows. Results? You know exactly what executes when. You dramatically reduce the possibilities of timing ambiguity or race conditions. You eliminate the need for artificial ways to guarantee program execution order, such as sequence structures. And you can probe or breakpoint all intermediate values, step-by-step, from one VI to the next.

As a useful side effect, putting error-in and error-out clusters on all of your subVIs also helps to standardize the icon/connector panes of your work. Many authors have advocated picking a single connector-pane pattern and using it on all of your work; popular ones include the 4-2-2-4 and 5-3-3-5 terminal layouts.

There are certainly more sophisticated approaches to error handling than what I've presented here. Peter Blume devotes an entire chapter to the subject in his book, for example. The point of this article is this: just start doing it. You'll see the benefits immediately. First, get into the habit of including at least the most basic error handling in your subVI's. Then you can start to get fancy.

The scenario described in my opening paragraphs is one that I've either witnessed or lived through too many times. This approach is so easy to do, and such a powerful tool to help make your code more robust and easier to debug, that there's no excuse not to do it.

Tuesday, August 5, 2008

U64 Millisecond Tick Count Utility

We all know and love the built-in tick count (mSec) function in LabVIEW. It unfortunately has two intrinsic limitations which, at one point or another, we've all probably encountered. The first is that its output is a U32, which means that it wraps after 2^32 - 1 mSec, or roughly every 49.7 days. The second difficulty is that we don't know when the wrap will happen; it could be tomorrow, or seven weeks from today. The value of the millisecond tick doesn't seem to be tied to any external temporal reference. Well, I've got a solution that at least tackles the first issue... it's the U64 Tick Count.vi shown below.

Its operation is pretty simple and self-explanatory. It's a functional global, in the sense of using uninitialized shift registers in order to maintain state data between calls. When the U32 time function wraps around, 2^32 is added to a running offset count, and the VI has a capability to properly initialize things on its first call. Since I use this in both Windows and RT applications, there's a Conditional Disable structure to provide the appropriate mSec tick function for each. (I'd like to post the VI itself for download, but I haven't yet figured out how to do this on Blogspot. If any of you intrepid readers could fill me in, I'd be grateful.)
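Since I can't yet post the VI, here's the logic sketched in Python (a sketch under my own assumptions, not the VI itself; the tick source is passed in as a function so the wrap handling can be exercised with simulated values):

```python
def make_u64_tick_count(u32_tick):
    """Sketch of U64 Tick Count.vi.

    u32_tick: a function returning a 32-bit millisecond tick that wraps
    at 2^32. Returns a function yielding an ever-increasing 64-bit count.
    The closure's dict stands in for the uninitialized shift registers
    of a functional global.
    """
    state = {"offset": 0, "last": None}

    def u64_tick_count():
        tick = u32_tick()
        # First call initializes; afterwards, a tick smaller than the
        # previous one means the U32 counter wrapped, so bank another 2^32.
        if state["last"] is not None and tick < state["last"]:
            state["offset"] += 1 << 32
        state["last"] = tick
        return state["offset"] + tick

    return u64_tick_count

# Simulated ticks crossing a wrap: 2^32-2, 2^32-1, then wrap to 0, 1
ticks = iter([4294967294, 4294967295, 0, 1])
timer = make_u64_tick_count(lambda: next(ticks))
print([timer() for _ in range(4)])
# [4294967294, 4294967295, 4294967296, 4294967297]
```

One caveat worth noting, which applies equally to the VI: the function must be called at least once per U32 wrap period (about every 49.7 days), or a wrap will go undetected.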

With a U64 output, this millisecond timer will wrap approximately every 584 million years. I would guess that this should be sufficient for most applications.

Thursday, July 10, 2008

The Importance of Style in LabVIEW Programming

Earlier this week, my wife, son, and I visited the teaching greenhouse at UConn to see a specimen of a Sumatran Corpse Flower (Amorphophallus titanum) in bloom. Relatively rare in captivity, these plants are the rock stars of the horticultural world, producing a single flower six feet tall which lasts less than a day. They also smell like a putrid rotting cadaver, adding to the fascination. What does this all have to do with LabVIEW, you ask? Read on and you'll see.


Over the years, much has been written and discussed regarding the importance of good programming style. Several years ago, the (now defunct) LTR newsletter had an excellent article entitled "Rules to Wire By"; it's still relevant, and you can read it here. An authoritative book on the subject, The LabVIEW Style Guide, was published last year by my former boss, Peter Blume. It's an exhaustive compilation of rules, recommendations, examples, and illustrations of good and bad LabVIEW style. It covers a lot of ground and can serve as a great reference for those seeking to perfect this aspect of their LabVIEW programming skills. This book grew out of a presentation Peter gave at NIWeek 2002, which you can download here. The terrific books by Gary Johnson, Jeff Travis, Jim Kring, Jon Conway, and others, further strengthen the LabVIEW references available to experienced programmers and neophytes alike. With these and all of the myriad other resources available, you'd think that there would be no excuse for some of the poor code which we've all seen in our careers.

The trouble is, style is not enough.

Recently, I witnessed a project that went from bad to worse. The original programmer produced an application that didn't look all that good, and functioned poorly. The project was re-assigned to another engineer in the company, a CLA at that. She refactored the code according to the best style standards available, but didn't pay enough attention to the underlying program's structure, and as a result its performance didn't meet the customer's requirements. Re-assigned yet again, to yet another CLA, nothing really improved. The code got prettier and prettier with each iteration, but no one addressed the fundamental design issues which prevented the program from meeting its goals.

LabVIEW is an easy language to learn and easy to start programming. It's also the easiest language to program poorly, and unlike text-based languages, your code will still run -- maybe even run fairly well. LabVIEW seems to cast a spell over even experienced programmers in other languages who seem to forget everything they used to know. People jump right into coding without giving any thought to program design. The old standard concepts of structured programming, hierarchical organization, abstraction, modularity, of loose coupling and tight cohesion... gone in a flash. By paying attention to style, your poorly-written code can be made to look marvelous. Personally, I'd much rather deal with a well structured application that happens to have crooked wires and overlapping block diagram objects, than a lovely piece of code that's written without attention to the disciplines mentioned above.

A LabVIEW program that's created using correct style but without appropriate design is like a Corpse Flower: it's beautiful and looks very impressive, but it still stinks.