June, 2012


21
Jun 12

Program Configuration Space Complexity

Taking the Coursera Probabilistic Graphical Models course has had me thinking a lot about complexity; lately, about program state complexity in the context of testing in particular.

How many discrete tests would it take to ensure every combination of inputs is correct? Let’s do some back-of-the-napkin math and see what we come up with. Please excuse my amateurish LaTeX.

First, let’s start with a thought experiment. Imagine a program (in any language) in which all of the configuration options are expressed as enumerations. Now imagine that each configuration option has some impact on the behavior of every other.

T_{worst} = \prod_{c \in C_{opts}} \mathrm{Card}(c)

We’ll call this our worst-case testing complexity, as it completely ignores how everything is wired up. This effectively treats the entire program as a giant black box.

The terrifying thing to consider is that the addition of a single Boolean configuration option can double your testing complexity. Now take a moment for that to sink in. Just adding a simple flag to a console application potentially doubles the test space.
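To make the doubling concrete, here is a minimal sketch of the worst-case product; the option names and cardinalities are invented for illustration:

```python
from math import prod

# Hypothetical enum-style configuration options and their cardinalities.
options = {"log_level": 4, "output_format": 3, "retry_policy": 2}

worst_case = prod(options.values())  # 4 * 3 * 2 = 24 combinations
options["dry_run"] = 2               # add a single Boolean flag...
doubled = prod(options.values())     # ...and the test space doubles

print(worst_case, doubled)  # 24 48
```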

This previous description only applies to a program where every configuration option is global. What about nice, well-separated classes?

T_{OO} = \sum_{k \in Classes} \left( \prod_{s \in State_k} \mathrm{Card}(s) \right) \left( \sum_{m \in Methods_k} \prod_{i \in Inputs_m} \mathrm{Card}(i) \right)

If we consider each class as its own little program, a non-static class is only dependent on the cardinality of its internal state space times the sum of the cardinalities of its methods’ input spaces.
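That per-class accounting can be sketched as follows; the state and input cardinalities below are invented for illustration:

```python
from math import prod

def class_test_count(state_cards, method_input_cards):
    # |internal state space| times the sum, over methods,
    # of each method's input-space size.
    state_space = prod(state_cards) if state_cards else 1
    return state_space * sum(prod(m) for m in method_input_cards)

# Hypothetical class: two Boolean fields, one method taking a
# 3-valued enum, another taking a 2-valued and a 4-valued input.
count = class_test_count([2, 2], [[3], [2, 4]])
print(count)  # 4 * (3 + 8) = 44
```

A stateless (immutable) class makes the leading state term collapse to 1, which is where the comparison below picks up.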

For comparison, how might pure functional programs look?

T_{FP} = \sum_{f \in Funcs} \prod_{i \in Inputs_f} \mathrm{Card}(i)

It’s interesting that the testing complexity of a pure functional program is equivalent to that of object-oriented programming done only with immutable classes. Each has only a single state, so the whole product-over-state term drops away and we are left summing over all of the methods in our program. Now, how many object-oriented languages give nice semantics for immutability? That’s another question entirely.
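Under that reading, a pure functional program’s test count is just the sum of per-function input-space sizes. A small sketch, with invented cardinalities:

```python
from math import prod

def functional_test_count(function_input_cards):
    # No internal state, so the product-over-state term is gone;
    # only the per-function input spaces remain.
    return sum(prod(f) for f in function_input_cards)

# Two hypothetical functions: one takes a 3-valued enum, the
# other takes a 2-valued and a 4-valued input.
total = functional_test_count([[3], [2, 4]])
print(total)  # 3 + 8 = 11
```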

Admittedly, it seems as though those very states, when taken out, could potentially turn into a roughly equal number of new classes, giving no benefit in terms of test count. However, in my experience this is not the case. Will a given class state change the behavior of every single method? Also consider that if those states can change mid-computation, things become much more complex.

We seem to be stuck with the cardinality of our method or function inputs as the primary driver of testing complexity. Essentially, this puts us back at our original thought experiment, just in manageable chunks that are useful for pinpoint debugging. We’ll never be able to test every state. The best you might do is use a random testing tool like QuickCheck.
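In the spirit of QuickCheck, a property-based check can be hand-rolled: sample random points from an input space far too large to enumerate and assert an invariant on each. A minimal sketch in plain Python, with no testing library assumed:

```python
import random

def prop_reverse_twice(xs):
    # Property: reversing a list twice yields the original list.
    return list(reversed(list(reversed(xs)))) == xs

random.seed(0)  # deterministic sampling, just for this sketch
for _ in range(100):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 20))]
    assert prop_reverse_twice(xs)
```

Real tools like QuickCheck go further, generating inputs from type information and shrinking failing cases to minimal counterexamples.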

As for languages which do not restrict inputs by type at all, they explode the testing space by forcing consideration of how every construct might interact with a given function. This could be seen as a new, very large term in both the object-oriented and functional models above.

Finally, this makes me lament the lack of any kind of range-value restriction in popular typed languages. Once we’ve done away with statefulness, it’s the most glaringly obvious area where we can reduce complexity. The good news is that we might start seeing some help from academia along these lines within the next few years.
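For flavor, the kind of range restriction being wished for can at least be faked at runtime in most languages today. A hypothetical sketch of a value type that only admits 0 through 100:

```python
class Percentage:
    """A value restricted to [0, 100]: a runtime stand-in for the
    range-restricted types the post wishes typed languages offered."""
    def __init__(self, value: int):
        if not 0 <= value <= 100:
            raise ValueError("Percentage must be in [0, 100]")
        self.value = value

p = Percentage(42)
print(p.value)  # 42
```

Any function taking a Percentage now has an input-space cardinality of 101 rather than the full integer range, though unlike a true range type, violations surface at runtime instead of compile time.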


11
Jun 12

NYC Progressive F# Tutorials 2012 in Retrospect

It was the best of times, it was the… Actually, it was all just fantastic.

On the beginner track Chris Marinos, Phil Trelford, and Tomas Petricek worked tirelessly in teaching F#. I was thoroughly impressed with the quality of their tutorials and would recommend seeking any of them out if you wish to become more proficient in F#. By the time we got to the contest at the end almost everyone was ready to compete with just a little help getting started.

On the advanced track, attendees made type providers with Keith Battocchi, learned to use Adam Mlocek’s fantastic numerical computing library FCore (you can’t get any closer to Matlab syntax), and got some hands-on time with the Microsoft Cloud Numerics team. Paulmichael Blasucci ran the contest on that side and noted that few had trouble and that the competition was just brutal.

On both sides the rooms were almost always packed and I didn’t hear a single complaint about the quality of the tutorials or speakers. Everyone was just thrilled to be there and stayed focused on learning the entire time. I’ve seen few conferences that kept the attendees so enthralled. Kudos to everyone involved for making such a great event happen.

Now, on to the links and materials (they’ll be updated as they come in).

Event Photos are up on Facebook!

Beginner Track:
– Chris Marinos’s F# Koans
– Phillip Trelford’s Writeup on Day 1 and Day 2

Advanced Track:
– Keith Battocchi’s Type Provider Tutorial Code and Slides/Docs
– Adam Mlocek and Tomas Petricek’s FCore Slides and Code
– Roope Astala’s Cloud Numerics Tutorial Slides and Tutorial Examples

Both Tracks:
I’m planning a Silverlight release of the contest code with the contestant sample AI in an upcoming blog post. For now, you can find the code in its github repo.

Social Media:
Chris Marinos’s Blog and Twitter
Phil Trelford’s Blog and Twitter
Tomas Petricek’s Blog and Twitter
Don Syme’s Blog and Twitter
The Cloud Numerics Blog
SkillsMatter on Twitter

Going Further:
F# Training in London and NY
FCore Library