Sunday, 30 November 2008

Ethic or Relationship

I have just finished reading a book called “So you don't want to go to church any more” by Jake Colsen. It was recommended to me by a friend and I purchased a copy to read on the train on my way to and from work. I am not going to relate the story but it is a novel about understanding what Christianity is. Jake Colsen is a pseudonym for the two authors.

I find each chapter quite challenging so it is nice to sit back and reflect on what I have read. The big question is about the nature of Christianity and what God is seeking from those who believe. One of the issues raised is the question of whether religion is about the implementation of ethical standards or developing a relationship with the loving God.

The difficulty with seeing it as an ethical standard is that if people don't live up to the standard then they are judged and discarded. There is also the issue of where people gain the strength to live up to some of the exacting standards expected of them. Enforcing an ethical standard is difficult. Finding a balance between judging people and encouraging them within the ethical framework becomes a major issue.

Contrast this with developing a relationship with the loving God. If you are interested in building a relationship then you will overlook some of the behavioural issues. The authors clearly present the idea that God wants relationship and not rule following. They would contend that as we grow in relationship with him, we will find ourselves changing to take on more of his characteristics.

For me, the book carries a secondary message and that is communicating our vision and dreams to others. In the story, Jake is encouraged in his walk by a person called John. John never tells Jake what to do but he always seems to have the right questions to ask and thought-provoking challenges to Jake's thinking. Those challenges and questions are always given when Jake is ready to accept them. How often have you been battered by someone who believes that you should simply accept all that they say? Their one desire is to have you do what they want. No doubt John believed that there would be a certain outcome to this journey but he also knew that the journey had to be taken at a pace that Jake could handle.

I recommend reading this book although you may not like some of the views expressed especially if you believe that church is an institution.

Reference

Jake Colsen (2006) So you don't want to go to church anymore. Los Angeles: Windblown Media.

Saturday, 22 November 2008

FluxUnit-ing Flex programming

In last week's blog, I talked about a number of different testing frameworks for doing test-driven development (TDD) in Flex. This week, I want to continue the investigation of FluxUnit and how it can be used for behaviour-driven development (BDD) style scenario specifications.

In behaviour-driven development, they talk of writing stories and scenarios. The story describes the stakeholder, the feature wanted, and the benefits. The scenarios provide specific examples of the stories. In general terms, a scenario might be written in the form:
Scenario:

   Given [pre-condition]
   When I [action]
   Then I expect [post-condition]

For any scenario, there may be more than one given, when, and then combination. For any given, there might be multiple when and then combinations.
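
To make this concrete, the scenario that the FluxUnit code later in this post implements might be written as follows (the wording here is my own illustration):

Scenario: Cloning a call rate period

   Given a call rate period
   When it is cloned
   Then I expect the clone to be equal to the original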

Translating this to a testing tool means that I want some method of ensuring that the given pre-condition is created before executing the when action. Once the action is completed, there needs to be a way of testing that the post-condition applies. Finally, there is a need to clean up afterwards to ensure that there is nothing hanging around from this test that might interfere with the next test.

For a testing framework like FlexUnit, JUnit, or NUnit, the setup method provides a way of setting up the pre-condition so that it can be reused in a number of tests. The test method has to perform the when action and the testing of the post-conditions. The teardown method provides a clean-up mechanism. Of course, you can be lazy and put everything in the test method but that limits the reuse of the pre-condition set-up logic.
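
As a rough sketch of that mapping, assuming a FlexUnit-style TestCase subclass and the CallRatePeriod class used later in this post (method visibility and signatures may differ slightly between framework versions), the given goes in setUp, the when and then go in the test method, and the clean-up goes in tearDown:

private var basePeriod : CallRatePeriod;

override public function setUp():void {
    // Given: a period
    basePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
}

public function testCloneIsEqualToOriginal():void {
    // When: the period is cloned ... Then: the two periods are equal
    var clonedPeriod : CallRatePeriod = basePeriod.clone();
    assertTrue(basePeriod.equals(clonedPeriod));
}

override public function tearDown():void {
    // Clean up so that nothing carries over into the next test
    basePeriod = null;
}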

At first look, FluxUnit and its describe, before, it, and after functions leave you wondering how this can match the BDD structure. My approach isn't the only method but it does provide a clean structure and the output from the tests reads like a BDD specification. Of course, I could change or extend FluxUnit's code to give me the labels that I would prefer but my goal isn't to write a new tool. I simply want to use what is already available.

describe('Verify the operation of clone and equals methods', function():void {
    describe('Given a period', function():void {
        var basePeriod : CallRatePeriod;
        before(function():void {
            basePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
        });
        describe('When cloned', function(): void {
            var clonedPeriod : CallRatePeriod;
            before(function():void {
                clonedPeriod = basePeriod.clone();
            });
            it("should find the two periods are equal", function():void {
                expect(basePeriod.startDay).to(equal(), clonedPeriod.startDay);
                expect(basePeriod.equals(clonedPeriod)).to(be_true());
            });
        });
    });
});

Note: I have left out the surrounding code that sets up the FluxUnit environment. The first describe simply acts as a description of the scenario. It encloses all the given, when, and then combinations that need to be satisfied. The first nested describe states the pre-condition (the given). The before call is needed so that this logic is executed in the right sequence. It isn't strictly essential, but because of the way FluxUnit works, any code outside the call to before is executed while FluxUnit builds the execution sequence, whereas the before code is executed once the build is complete. Putting the set-up code directly in the describe can therefore leave things undefined or containing null values, so using the before method is preferred. The describe nested within the given describe provides the when action; again, the actual action should be placed in a before call. Finally, the it call defines the post-condition tests.
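
To illustrate that ordering issue with a sketch based on the same example: if the clone were taken directly inside the describe, it would run while FluxUnit is still building the sequence, before the outer before has populated basePeriod, so the action has to be deferred into its own before call.

describe('When cloned', function(): void {
    // Running the action here would execute during the build, while basePeriod
    // is still null, leaving clonedPeriod undefined when the it block runs:
    // clonedPeriod = basePeriod.clone();   // too early

    var clonedPeriod : CallRatePeriod;
    before(function():void {
        clonedPeriod = basePeriod.clone();  // runs in sequence, after the build
    });
});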

The output from FluxUnit reads like a specification: the text entered in each of the describe and it calls forms the basis of the output, in this case giving a clear statement of the scenario, the pre-condition, the action, and the expected post-condition.

If I had used multiple when describes then I might have needed to introduce an after method. It wasn't necessary in this case. Multiple it calls are also possible but you need to decide whether you want to report each post-condition separately.
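
For example (a sketch only, reusing the period from the earlier code), the when describe could carry two it calls and an after that resets the clone before the next when runs:

describe('When cloned', function(): void {
    var clonedPeriod : CallRatePeriod;
    before(function():void {
        clonedPeriod = basePeriod.clone();
    });
    it("should copy the start day", function():void {
        expect(clonedPeriod.startDay).to(equal(), basePeriod.startDay);
    });
    it("should be equal to the original", function():void {
        expect(basePeriod.equals(clonedPeriod)).to(be_true());
    });
    after(function():void {
        clonedPeriod = null;   // clean up before the next when
    });
});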

If I were to enhance FluxUnit, I would change the describes so that the methods I was calling were scenario, given, when, and then. This would make it clearer exactly what I was trying to do. The other issues that I have with the way the code is executed are more than likely the result of using Flex rather than a result of the author's design.

The only reason that I will not be using FluxUnit on my current project is that I need a testing framework that will work in a Maven build context. Building the test program would work but I am not sure that it produces the output that the available Maven plugins need in order to determine whether the tests all passed. This is on the list for further investigation.

Saturday, 15 November 2008

Flex testing frameworks

Over a month ago, I was asked to do a Flex programming exercise to demonstrate my ability to program. They specifically asked me to use the FlexUnit testing framework and the Cairngorm model-view-controller (MVC) framework. In my usual way of investigation, I explored the range of possibilities for testing frameworks for Flex. I did look up the MVC and MVP (model-view-presenter) options but only experimented with Cairngorm.

There are a lot of what I would call “me too” testing frameworks available. Some are little more than announcements of intent to produce one. The ones that I will review here were selected because they indicated some unique features rather than simply saying that they implemented the JUnit style of framework. The frameworks looked at included FlexUnit, fluint, FUnit, and FluxUnit.

What are my credentials for being able to do such an evaluation? Some thirty-plus years ago, I started my career as a programmer in the computer industry. One of my early projects was to write a terminal simulator so that the programmers could test their programs without needing access to the networked terminals. On the same project, I was also involved in building what might now be called an integration tool. The team that I worked in was responsible for support routines that needed to be tested and released for the application programmers.

Over the last nine years as a lecturer in a university, I have been working with and teaching test-driven approaches to programming using both JUnit and NUnit. When I start a new project, one of the first things that I look for is a testing framework for the languages that I will be using. For the .NET environment, I also wrote my own database test unit (DBUnit) and a Forms testing extension for NUnit. As a result, I feel that I can talk with some confidence about testing frameworks and what they have to offer.

FlexUnit

I am going to use Adobe FlexUnit as the base testing framework since it is distributed by Adobe through their open source site and appears to be fairly widely used and supported in the Flex community. It, along with dpUnit, the forerunner of fluint, was the first testing framework that I experimented with in the Flex environment.

FlexUnit is based on the original JUnit / SUnit (Smalltalk testing framework) design. It doesn't use reflection but requires the programmer to build suites of tests that are then passed over to the runner. This management code is tedious to write and it is easy to miss adding a test to the suite. A test can be another suite of tests or a reference to a specific test. You have to be disciplined to ensure that you set up this management code correctly. The usual practice is to put a class method called suite at the front of all the classes that implement tests (i.e. extend the TestCase class). This method simply adds the tests to a suite that is returned to the caller (i.e. 'theTestSuite.addTest(new testClass("testName"));'). You then create one or more suites in a hierarchy that enables all the suites to be passed in one batch to the runner.

var testSuite: TestSuite = new TestSuite();
testSuite.addTest(testCase.suite());
testSuite.addTest(testCase2.suite());
testRunner.test = testSuite;
testRunner.startTest();

For those used to the early versions of JUnit, this conforms to the practice that was then required to set up the test suites.
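
A hedged sketch of the suite() class method convention described above might look like the following (CallRatePeriodTest and its test method names are hypothetical examples of my own; FlexUnit's TestCase constructor takes the name of the test method to run, in the style of the early JUnit versions):

package
{
    import flexunit.framework.TestCase;
    import flexunit.framework.TestSuite;

    public class CallRatePeriodTest extends TestCase
    {
        public function CallRatePeriodTest(methodName:String = null)
        {
            super(methodName);
        }

        // The class method that the suite-building code calls. Forgetting to add
        // a test here is the easy mistake mentioned above.
        public static function suite():TestSuite
        {
            var theTestSuite:TestSuite = new TestSuite();
            theTestSuite.addTest(new CallRatePeriodTest("testCloneIsEqualToOriginal"));
            theTestSuite.addTest(new CallRatePeriodTest("testCloneIsASeparateObject"));
            return theTestSuite;
        }

        public function testCloneIsEqualToOriginal():void
        {
            var basePeriod:CallRatePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
            assertTrue(basePeriod.equals(basePeriod.clone()));
        }

        public function testCloneIsASeparateObject():void
        {
            var basePeriod:CallRatePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
            assertFalse(basePeriod === basePeriod.clone());
        }
    }
}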

The available asserts also follow the original pattern of assertEquals, assertNotEqual, etc. These are all provided in the TestCase class, which you inherit into your own test classes.

There are also FlexUnit Ant tasks and the Flex Mojos available that can be used to integrate FlexUnit tests into an Ant or Maven build process. I haven't tested the Ant tasks but have used the Flex Mojos to build and run tests.

Fluint

This was originally called dpUnit and is developed by digital primates IT Consulting Group. Fluint claims a number of advantages over FlexUnit including support for asynchronous tests and the testing of user interface components. At this point in my experiments, I haven't used these advanced features but they are on my list of things to experiment with.

In many respects, fluint follows the example of FlexUnit. For basic testing, one of its advantages is that you don't have to create a suite that contains the individual tests. It uses reflection to identify the tests looking for method names that begin with test or the [Test] attribute on a method. My experience so far indicates that the use of attributes hasn't extended to the test class (TestFixture) or the setUp and tearDown methods. Experiments are ongoing.
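
A hedged sketch of what a basic fluint test case might look like under those rules (the package and class names are taken from the fluint distribution as I remember it, so treat them as assumptions; CallRatePeriodTest is again a hypothetical example):

package
{
    import net.digitalprimates.fluint.tests.TestCase;

    public class CallRatePeriodTest extends TestCase
    {
        // Discovered by reflection because the name begins with "test";
        // no per-test registration in a suite() method is required.
        public function testCloneIsEqualToOriginal():void
        {
            var basePeriod:CallRatePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
            assertTrue(basePeriod.equals(basePeriod.clone()));
        }
    }
}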

Fluint also uses a slightly different approach to building the base suite.

var suiteArray:Array = new Array();
var suite:TestSuite = new TestSuite();
suite.addTestCase(new testCase());
suiteArray.push(suite);
testRunner.startTests(suiteArray);

In many respects, this is cleaner than the process used for FlexUnit. The base assertions used by fluint are identical to FlexUnit's. The asynchronous testing seems to use a new set of methods that don't fit the traditional assert structure. I will report on those once I have conducted some experiments.

Fluint also provides Ant tasks, and given the Flex Mojos' ability to define your own test runner, I suspect that it would be possible to integrate fluint into a Maven build, since the Flex Mojos use the FlexUnit Ant tasks to run the tests and Maven allows Ant tasks to be integrated into the build.

At the basic level, fluint simply provides a different way of setting up the test suites. If it has a real advantage, it has to be in the support that it provides for asynchronous and user interface testing. You may have noticed that the name isn't FlUnit but fluint. The authors argue that it is aimed at integration testing.

FUnit

FUnit follows the more recent versions of NUnit in the way that it identifies tests and in the structure of its asserts. It uses a tag-based attribute model to identify test classes and the tests that they contain. This means that there is less set-up coding that needs to be done to set up the test suites. The test classes still need to be added to a suite, but not the individual tests. I am not going to illustrate its coding pattern here as I don't think it differs enough from fluint.

It has one major drawback: in its current version, the GUI doesn't run the tests. The tests have to be run through a console runner and, as of this writing, the only way that I have been able to view the console is by running the test program in debug mode within Flex Builder. Not exactly helpful. However, the next version of the Flex Mojos will support the running of FUnit tests in a Maven build so it may be useful in that context. What is available in the library presents a nice user interface but doesn't report on the tests. Once they have this implemented, this could be a good package.

The assert structure in FUnit follows the more recent structure introduced in NUnit. That is, it uses an Assert class with static methods (e.g. Assert.areEqual()). This has the potential of allowing the user to easily extend the conditions that are available, or even to adopt the more recent enhancements that have been introduced in NUnit.
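
A minimal sketch of the difference in calling style (Assert.areEqual is the FUnit form mentioned above; the FlexUnit line shows the inherited equivalent; the period variables are borrowed from the earlier clone example):

// FlexUnit style: asserts are instance methods inherited from TestCase.
assertEquals(basePeriod.startDay, clonedPeriod.startDay);

// FUnit style: static methods on an Assert class, as in recent NUnit versions.
Assert.areEqual(basePeriod.startDay, clonedPeriod.startDay);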

FluxUnit

This claims to be a behaviour-driven development (BDD) framework rather than a test-driven development (TDD) framework. As a consequence, the way that it implements its tests is very different to the other suites covered in this blog.

FluxUnit uses describe, before, it, and after methods and dynamic classes. I am relatively new to Flex so some of the coding patterns used in FluxUnit are novel to me and I will not attempt to explain what they are doing. The code to run the tests is as follows. Note that the creation of the object containing the tests isn't explicitly assigned to anything.

Flux.setRoot(body);
new CRFluxPeriodOverlap();
Flux.Run();

In terms of defining a test, this looks quite messy to start with but becomes easier as you gain experience.

public dynamic class firstSpecs
{
    public function firstSpecs()
    {
        var self: firstSpecs = this;
        new Flux(this).Unit(function():void {
            with (self) {
                describe('initial state', function():void {
                    before(function():void {
                        // setup code
                    });
                    it("expected result", function():void {
                        expect(variable).to(equal(), value);
                    });
                    after(function():void {
                        // clean up code
                    });
                });
            }
        });
    }
}

This appears to work by defining dynamic functions. It is possible to nest describes and to have before and afters that are global to all the describes. The user interface reporting is also interesting since it uses the text from the describe and it functions to structure the output.

The authors have an example of using it for asynchronous testing but not for user interface testing. There is also no reference to Ant tasks or to the ability to work with the Maven Mojos.

Although I like the terminology used in behaviour-driven development, I still find the tools uncomfortable to use. At least this one is workable if you apply a simple template. It is, however, easy to misplace the bracketing, and it can take some time to get it corrected.

Conclusion

At the basic functionality level, there is little difference between any of these frameworks. FlexUnit appears to be a proven framework and is able to be used in Ant and Maven builds. FUnit appears to have Maven support and possibly also could be used in Ant builds but its lack of a GUI is frustrating. Fluint has the additional functionality for asynchronous tests and user interface component tests. It also has Ant support and may be easily integrated into a Maven build. FluxUnit provides more readable output.

My personal order of preference at this point is fluint and then FlexUnit. I would use FUnit ahead of FlexUnit if the graphical user interface worked. I intend to keep experimenting with FluxUnit simply because I like the BDD style but until I see support for Ant and Maven builds, I wouldn't use it on a project. I would also like to see the coding structure improved so that it was easier to set up and use.

Saturday, 8 November 2008

Sabbaticals and calling

In the process of looking for paid employment, I picked up a book written by MacKenzie and Kirkland (2002). In a chapter on sabbaticals, they talk of the need for a time of rest; taking time out from the normal routine to be restored and to re-evaluate the direction of our lives.

Supposedly, the time that we had away last year was a sabbatical but it was filled up with the normal routine of research (working on the PhD). It certainly had some elements of rest and restoration but it wasn't the break from routine that enables real refreshing.

Now, as I work in industry, I feel like I am having a sabbatical. The work is different and so are the pressures. I am seeing my academic work in a new light and considering how I might apply all that I have learnt in this new context.

Another chapter in their book focuses on calling. Here they make an interesting observation that our call isn't to a task. Rather it is to a relationship with God and with those around us. It is living out that relationship in whatever context we work that really illustrates our calling.

In a Christian context, a call is often associated with entering Christian work but MacKenzie and Kirkland are emphasising that every Christian is called. The way that we work out our calling will show in the way that we do our work and interact with those around us.

Reference:

MacKenzie and Kirkland (2002) Where's God on Monday? Christchurch: NavPress.

Agile theory versus agile practice

After 17 years working as a lecturer, I have returned to industry as a developer. I return having endeavoured to encourage students to use test-driven approaches to software development and having completed research on the ways that software practitioners express their understanding of object-oriented software development. My knowledge of the implementation stacks is more theoretical than practical, although in developing my teaching samples I endeavoured to follow agile practice. The bulk of my code has unit tests.

In the process of obtaining this contract, I had to do a programming exercise. While doing that exercise, I learnt again about doing small incremental changes with your tests. If a test does too much then it becomes too difficult to implement. If you find yourself going to the debugger to determine the cause of the problem then the test is probably asking for too big a change. Small incremental changes can be implemented rapidly and tested quickly.

One week into my assignment and I am thinking about the challenges that lie ahead. This week has seen me learning about the system that I am to work on and installing the development environment on the computer I am to use. The development environment uses the Maven build tool and a source control system, and the appropriate testing frameworks are installed. Having the tools installed doesn't lead to an agile process. One of my first observations was that for the code that I am to work with, there are no tests. I will primarily be working with Flex on user interface issues but there is a lot of functionality in these user interfaces and I see no tests.

As I look at the physical environment with its individual workstations, I wonder whether an agile process will work in this environment. Agility is a mindset and not an environment. Why don't programmers write tests first? It comes down to the way that they perceive or understand the programming task. For me, an upfront test or behaviour specification isn't about proving that my code works. It is an executable specification of what the code is expected to do. The first part of any programming exercise is understanding the requirements. If I can turn those requirements into executable tests then not only am I showing that I understand what is required, I am also showing that I understand how I will verify that I have satisfied those requirements. Whether I use TDD or BDD to drive the development of software, the key thought is that I am writing executable specifications that will verify that the code I am about to write does what it is supposed to do.

To write an executable specification, I have to make design decisions about the implemented solution. I have to decide what objects I will need and what behaviour those objects will implement. I need to think about the architecture of the system and how that architecture might change as new requirements are specified. The executable specifications will help ensure that my changed architecture still delivers what is required.

I don't see the writing of executable specifications as that different from what I might have done thirty years ago when I was COBOL programming. When I wrote code, I was always thinking about how I was going to test it. Sometimes those tests were automated, but there was always a need to ensure that the data was there in order to test out the required features of the software. What I see as different about my practice now is that I am creating executable specifications at the first opportunity rather than leaving things until I have some code to test. I am not so much writing the specifications to test my code as writing them to reflect my understanding of what is required. I hope that this is reflected in the names or descriptions that I give my executable specifications.
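
As a small, hypothetical illustration (the requirement and names are invented, and I am borrowing the CallRatePeriod class from my earlier posts), a requirement such as "a cloned call rate period must remain equal to the original" becomes a specification first and a test second:

// The name states the requirement; the body verifies it.
public function testClonedCallRatePeriodRemainsEqualToOriginal():void
{
    var original:CallRatePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
    assertTrue(original.equals(original.clone()));
}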

Although the TDD or BDD tools might help me create the executable specification, it is really the mindset that I bring to the task that determines whether I apply agile practices. My conception of the task will dictate how I approach the task. If I want agility in my practice then agility has to drive my thought processes.