Saturday, 27 December 2008

Flex MVC frameworks

Over the last two months, I have been involved in writing Flex code using the Cairngorm framework. It claims to be an MVC framework. On closer examination, Cairngorm provides an alternative event mechanism that focuses on the use of commands, a standard location for the implementation of services, and a set of development principles.

The Cairngorm framework encourages the use of singletons for:

  1. A front controller that is used to register commands and events that fire them,
  2. A service locator used to hold dictionaries of remote object, HTTP service, and web service references. The service delegates use this to locate the correct service object,
  3. A model locator used to gain access to the applications data model,
  4. A view locator that allows the location of views from commands if required, and
  5. An event dispatcher that dispatches events using the Flash event mechanism.

The event dispatcher sits behind the scenes; the programmer doesn't need to be aware of its existence. It is the only one of these singletons that is actually referenced by any of the Cairngorm code. None of the others actually need to be singletons.

The Cairngorm framework also recommends defining an event class for each event in the application. The primary purpose of the event class / object is to define the data to be passed to the command executed when the event is fired. An ICommand interface is provided to give a standard command type. A series of events can be chained together by defining commands that extend the sequence command class.

The view locator and view helper are provided so that reference back to view objects can be achieved if required in the application. A value object interface is provided for model value objects.

The basic processing structure is that user gestures are translated into Cairngorm events which are dispatched, causing the associated command to be executed. The command causes the model to be updated. Through the data binding mechanisms, the view is notified of changes and refreshes.
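A minimal sketch of that flow, using a hypothetical event and command pair (the class names, the accountId property, and the LOAD_ACCOUNT name are mine, not from the project):

```actionscript
import com.adobe.cairngorm.commands.ICommand;
import com.adobe.cairngorm.control.CairngormEvent;
import com.adobe.cairngorm.control.CairngormEventDispatcher;

// A Cairngorm-style event whose whole job is to carry the data
// needed by the command that handles it.
class LoadAccountEvent extends CairngormEvent
{
    public static const LOAD_ACCOUNT:String = "loadAccount";
    public var accountId:int;

    public function LoadAccountEvent(accountId:int)
    {
        super(LOAD_ACCOUNT);
        this.accountId = accountId;
    }
}

// The command registered for that event; it updates the model and
// the data-bound views refresh on their own.
class LoadAccountCommand implements ICommand
{
    public function execute(event:CairngormEvent):void
    {
        var loadEvent:LoadAccountEvent = LoadAccountEvent(event);
        // ... call the service delegate, then update the model ...
    }
}

// In the front controller the pair is wired up with
//     addCommand(LoadAccountEvent.LOAD_ACCOUNT, LoadAccountCommand);
// and a user gesture fires the event with
//     CairngormEventDispatcher.getInstance().dispatchEvent(new LoadAccountEvent(42));
```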

The first impression when trying to use the Cairngorm framework is that all these singletons restrict the ability to test drive the code or to have multiple contexts within the application. Since none of the singletons are actually required by the framework, it proves easy to use the Cairngorm event mechanisms and the basic structure but not use all the singletons.

In my experimentation, in order to be able to test code using remote objects, I have mocked out the service locator and the associated remote objects. A little exploration of the Flex SDK's definition of remote objects showed that this is relatively easy to do. We use an application controller singleton to hold references to the front controller (the register of events and commands), the service controller, and the model. A façade class defines static functions that make it easier to use our code structure.

I have implemented an initialisation entry point for the application controller that allows me to define alternative front controllers, service controllers, and models. I have also implemented a method to destroy the singleton so it can be reset for a second set of tests.
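A minimal sketch of such a resettable application controller (the class and property names are illustrative, not the project's actual code):

```actionscript
// Illustrative resettable singleton. initialise() allows alternative
// front controllers, service controllers, and models to be injected
// for testing; destroy() resets the singleton between test runs.
class ApplicationController
{
    private static var instance:ApplicationController;

    public var frontController:Object;   // register of events and commands
    public var serviceController:Object; // real or mocked services
    public var model:Object;             // application data model

    public static function initialise(front:Object,
                                      services:Object,
                                      model:Object):ApplicationController
    {
        instance = new ApplicationController();
        instance.frontController = front;
        instance.serviceController = services;
        instance.model = model;
        return instance;
    }

    public static function getInstance():ApplicationController
    {
        return instance;
    }

    // Destroy the singleton so it can be recreated for the next test set.
    public static function destroy():void
    {
        instance = null;
    }
}
```

A test can then call initialise() with mocked services, run its assertions, and call destroy() in its teardown.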

With this structure, it is possible to inject test code to verify the correct firing of events and to trap service calls to verify that they are being set up correctly. So far, I haven't replaced model elements but that could be achieved as well. More importantly, by mocking out the services, it is possible to fire test data back into the application to verify its overall operation.

Instead of using the view helper and view locator, we have implemented our own notifier. This is not too different from Cairngorm events except that it doesn't restrict the number of receivers of a notification, and the receiver can say what method to call when the notification is sent. With our notifier, we could discard the Cairngorm framework completely, but we already had a commitment to its use.
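The essence of such a notifier can be sketched in a few lines (this is my illustration of the idea, not the project's actual class):

```actionscript
// Illustrative notifier: any number of receivers may register for a
// named notification, each nominating the function to call when the
// notification is sent.
class Notifier
{
    // notification name -> array of callback functions
    private var receivers:Object = {};

    public function register(name:String, callback:Function):void
    {
        if (receivers[name] == null)
        {
            receivers[name] = [];
        }
        receivers[name].push(callback);
    }

    public function send(name:String, data:Object = null):void
    {
        for each (var callback:Function in receivers[name])
        {
            callback(data);
        }
    }
}
```

Unlike a Cairngorm event, which fires exactly one command, a single send() here reaches every registered receiver.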

Sticking rigidly to the Cairngorm conventions introduces a lot of redundant code: for example, coding an event class for every event, or the delegate structure for accessing services. We have eliminated the use of event classes but at this stage are still using delegates. Some code refactoring may see those removed.

For the views, we have defined a mediator ActionScript class that extends the base class used for the view that it relates to. The MXML view file then uses this as its base tag, thus making the ActionScript class appear, from the perspective of the MXML, to be part of the MXML. The advantage of this is that we implement most of the code that carries out actions for the view in the mediator class. We can then test drive this free of the MXML components. I have endeavoured to enforce that the only way that the MXML view component can be updated is through the data binding mechanisms. In a sense, the mediator is a supervising controller (Fowler 2006).
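A sketch of the pattern, with a hypothetical login view (the class name, property, and method are mine):

```actionscript
import mx.containers.VBox;

// Mediator (supervising controller) extending the view's base class.
// Bindable properties are the only channel through which the MXML
// view gets updated, so this class can be test-driven without MXML.
public class LoginViewMediator extends VBox
{
    [Bindable]
    public var statusMessage:String = "";

    public function login(userName:String, password:String):void
    {
        // fire the event / command here; on completion, update the
        // bindable state and let data binding refresh the view
        statusMessage = "Logging in...";
    }
}
```

The corresponding LoginView.mxml then declares `<mediators:LoginViewMediator>` as its root tag instead of `<mx:VBox>`, so its controls can bind to statusMessage and call login() as if they were its own members.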

We have also examined the PureMVC framework and concluded that like Cairngorm, the advantage of PureMVC is the conventions that it proposes rather than the specific code that it implements.
PureMVC suggests the use of:

  1. Views with associated mediators that sit between view and rest of system (our naming comes from here although I originally called them presenters),
  2. A controller and commands with both a simple command and a macro command (similar to Cairngorm's sequence command),
  3. A model accessed through proxy objects (the proxies provide access to remote data through services, and domain logic is implemented in the proxies),
  4. A façade that acts as the gateway to the application (it brokers requests to the model, view, and controller; initialises the controller with command mappings; and uses commands to prepare the model and view), and
  5. Observers and notifications (these provide a publish/subscribe mechanism: notifications trigger commands; mediators send, declare interest in, and receive notifications; proxies send but do not receive notifications).
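As a sketch of that publish/subscribe convention (the notification name, mediator class, and handler body are illustrative, not from a real project):

```actionscript
import org.puremvc.as3.interfaces.INotification;
import org.puremvc.as3.patterns.mediator.Mediator;

public class AccountViewMediator extends Mediator
{
    public static const ACCOUNT_LOADED:String = "accountLoaded";

    // The mediator declares which notifications it is interested in...
    override public function listNotificationInterests():Array
    {
        return [ACCOUNT_LOADED];
    }

    // ...and receives them here, refreshing its view component.
    override public function handleNotification(note:INotification):void
    {
        // e.g. update the view from note.getBody()
    }
}

// Elsewhere (typically a proxy or a command), the façade brokers
// the notification out to every interested observer:
//     facade.sendNotification(AccountViewMediator.ACCOUNT_LOADED, account);
```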

Again, most of this is convention rather than requirement. The only real code in the PureMVC framework is in the observers and notification structure. The rest is primarily interfaces that ensure the conventions are being followed.

The idea of both frameworks is to reduce the binding between views, the business logic, and the model. Cairngorm has chosen to use an event structure that restricts the events to firing one command. PureMVC has chosen to implement a more flexible notification structure. Both frameworks are quite small. The core components (the event or notification mechanism) can be implemented quite quickly by any experienced programmer.

My conclusion is that you should think carefully about what you want to achieve and the overhead introduced by using these frameworks. In my case, my objective was implementing the idea of a supervising controller and endeavouring to ensure the minimum amount of dependency in the code. I have already decided that I wouldn't use either Cairngorm or PureMVC on future projects unless mandated by the client. A simple notification mechanism and some careful coding conventions will serve my purposes.

Reference

Fowler, M. (2006, 19 June). Supervising controller. Retrieved 9 November, 2008, from http://martinfowler.com/eaaDev/SupervisingPresenter.html

Thursday, 25 December 2008

It's tough learning to trust

For almost two months now, I have been working with new colleagues and endeavouring to understand their abilities and hopefully, they have been learning about my strengths. It has proven difficult at times as we have each struggled to adapt to our role within the software development project. How much faith and trust do we put in another's abilities or do we keep checking to see whether they are doing what we asked and in the way that we wanted?

I used to have a boss who, I said, I trusted not to do what was right for the department. That was a negative trust. If someone worked for me and I had that kind of trust in him, then I would be suggesting that he look at working somewhere else.

This process of learning to fit into an organisation and to trust my colleagues in a positive way has made me rethink what it means to trust God. Our colleagues don't necessarily do things the way that we expect. Sometimes what they do seems to be totally wrong but unless we want to do everything ourselves, we have to learn to trust them and focus on the things that we need to get done. Likewise God isn't going to operate in the way that we expect but if we don't trust Him, we will never understand what He has planned for us.

In Jeremiah 29:11, God said through Jeremiah, “I know the plans I have for you, plans to prosper you and not to harm you, plans to give you a hope and a future.” It certainly didn't look that way. Israel was going into exile in Babylon, and the promised land seemed to be being taken away from Israel. The question becomes: are the plans that we have for ourselves the same plans that God has for us?

Through this adapting to a new work environment and new work colleagues, I have tended to reflect on how I might respond to those with whom I am having difficulty. I form big elaborate strategies and then find, when it comes time to carry them through, that they no longer apply. Even this week, leading up to this Christmas break, I was thinking that I needed to tell someone something, only to discover when we did meet that none of those things seemed important any more. I planned my responses but never allowed God to lead me in those situations. In effect, I didn't trust Him to lead me through, even though I am confident that I am in this job because He put me here (but that is another story).

Last weekend, we listened to an Elvis Gospel CD. One of the tracks on that CD has been recurring in my thoughts. “Lead me O Lord, won't you lead me. Lead me, guide me along the way. For if you lead me, I can not stray. Lord just open my eyes that I may see. Lead me, O Lord, won't you lead me.” This sounds a great prayer but... !!! The theology doesn't make sense to me any more. Do I really believe that God doesn't seek to lead me? Maybe the words of this song should be “Lead me, O Lord, yes, you lead me, yes, you lead me and guide me along the way. If I opened my eyes then I would not stray.”

Why do I have a problem with the theology? It assumes that we have to plead with God for Him to lead us. It assumes that this isn't something that He already wants to do and is trying to do. Remember, He knows the plans that He has for us. I have begun to see that when I pray for what God already is doing or wants to do in a way that asks Him to do it then I am breaking His heart, I am not trusting Him to live up to His promises. We have to learn to trust Him and act on that trust.

There is another song that we used to sing with youth groups. Its words went something like this: “Change my heart, O God, make it ever new... Make me more like you.” I can now see God responding “I'm trying, but you resist” or “Yes, I will but I don't think that you may like it.” The problem isn't that God isn't working to change us or lead us. The problem is that we have already decided where God should be leading us and what he should be changing us to. When it doesn't happen as we expect, we plead with God to lead us, guide us, and change us. Where is our trust that God is doing what he has promised and that he will make us what he wants us to be if we will let him?

As I write this, we are celebrating Christmas Day in New Zealand, a day on which Christians celebrate the birth of Jesus. As I have reflected on this issue of trust and God leading as he has promised, I have also thought about the number of times when I have been asked why God doesn't reveal himself to us. The thing is that He did, and those who should have recognised Him rejected Him. Barclay, in his commentary on the shepherds coming to see the infant Jesus (Luke 2:8-20), relates an interesting story. He describes a European monarch “who worried his court by often disappearing and walking incognito amongst his people. When he was asked not to do so for security's sake, he answered, ‘I cannot rule my people unless I know how they live.’” (Barclay 1975). The monarch knew the importance of identifying with his people in their daily struggles. Being locked away in a palace didn't help him understand the struggles that they went through each day.

God in Jesus walked humbly amongst us. He shared in our struggles and understood our temptations. Each day he continues to walk with people who have learnt to hear his voice and acknowledge his leadership in their lives. He seeks a relationship with us. Instead of praying for God to make himself known to us or to walk with us, we need to learn to thank him for making himself known and walking with us each and every day. The transformation of theology makes a big difference.

As you read this my challenge to you is to look at your day and consider where you allowed God to lead and where you shut him out. Thank Him for doing as He has promised and apologise for when you have closed the door and locked him out.

Reference

Barclay, W. (1975). The gospel of Luke (revised ed.). Edinburgh: The Saint Andrew Press.

Saturday, 20 December 2008

FlexUnit revisited

On my current project, I have been working with FlexUnit and discovered that it will locate test methods in a class based on their name starting with test. I am using FlexUnit 0.85 since that is currently in the repository used by the Flex mojos.

In all the examples for FlexUnit, they show the use of a method that adds individual test methods to a test suite using 'addTest(new TestClass("testName"))'. This method to build the suite is usually placed in the test class (TestClass). In looking at how to build a tree of test suites, I looked at the addTestSuite method. Reading the documentation at the front of the class, it talks of adding a test class by using 'new TestSuite(TestClass)'. This approach adds the test class as a suite by locating all methods whose names start with test. This reduces the coding overhead and makes the framework easier to use.
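Side by side, the two styles look like this (CallRatePeriodTest and its test methods are hypothetical names for illustration):

```actionscript
import flexunit.framework.TestSuite;

// Explicit style: each test method is added to the suite by name,
// so it is easy to forget one.
var explicit:TestSuite = new TestSuite();
explicit.addTest(new CallRatePeriodTest("testClone"));
explicit.addTest(new CallRatePeriodTest("testEquals"));

// Reflective style: passing the class to the TestSuite constructor
// picks up every method whose name starts with "test".
var reflected:TestSuite = new TestSuite(CallRatePeriodTest);

// Suites can then be assembled into a tree and passed to the runner
// in one batch.
var all:TestSuite = new TestSuite();
all.addTest(explicit);
all.addTest(reflected);
```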

FlexUnit 0.85 doesn't seem to report the suites in a tree hierarchy. This is rather frustrating as the number of tests grows. FlexUnit 0.90 does correct this but still lacks the use of attributes for identifying tests and some of the other features now in JUnit for Java and NUnit for .NET.

Sunday, 14 December 2008

Assertive tests

Continuing my review of unit testing frameworks, I have been looking at how you might evaluate the features. Some of the issues that I have looked at include:

  1. the way that tests are identified

  2. the type of standard asserts or comparisons provided

  3. the ease with which the assert framework can be extended

  4. the coding style of the asserts

  5. the ability to run the tests in different environments (i.e. the GUI, Ant tasks, and Maven builds)

I am sure there are other comparisons that could be done but I wanted a quick way to compare the different frameworks. The focus of this blog is on the asserts supported by each of the frameworks. My assumption when I started working with these frameworks was that they would all support a common set since most are built on the JUnit framework. It seemed logical that all would provide the same asserts unless the language made one meaningless or there were new conditions that needed special handling.

So what assert conditions are supported by each suite?

Condition                | FlexUnit | Fluint | FUnit | FluxUnit
-------------------------|----------|--------|-------|---------
Equals                   |    ✓     |   ✓    |   ✓   |    ✓
Not equal                |          |        |   ✓   |    ✓
Object equals            |    ✓     |   ✓    |       |
Object not equal         |          |        |       |
Same object              |          |        |   ✓   |
Not same object          |          |        |   ✓   |
String match (1)         |    ✓     |   ✓    |   ✓   |
String no match          |    ✓     |   ✓    |       |
String contained         |    ✓     |   ✓    |       |
Not contained            |          |        |   ✓   |
String empty / not empty |          |        |   ✓   |    ✓
Starts with              |          |        |   ✓   |
Ends with                |          |        |   ✓   |
Equal ignoring case      |          |        |   ✓   |
True                     |    ✓     |   ✓    |   ✓   |    ✓
False                    |    ✓     |   ✓    |   ✓   |    ✓
Null                     |    ✓     |   ✓    |   ✓   |    ✓
Not null                 |    ✓     |   ✓    |   ✓   |    ✓
Undefined                |    ✓     |   ✓    |   ✓   |
Not undefined            |    ✓     |   ✓    |   ✓   |
Fail                     |    ✓     |   ✓    |       |
Instance of              |          |        |   ✓   |
Not instance of          |          |        |   ✓   |
Is finite                |          |        |   ✓   |
Is NaN                   |          |        |   ✓   |
Is >                     |          |        |   ✓   |
Is >=                    |          |        |   ✓   |
Is <                     |          |        |   ✓   |
Is <=                    |          |        |   ✓   |
Collections              |          |        |   ✓   |
Asynchronous tests       |          |   ✓    |       |

(1) Regular expression string match

It is possible to extend the asserts in all of the frameworks.

FlexUnit defines its asserts in a static class, Assert, that is extended by the TestCase class. This is simply so that the user doesn't have to type Assert.assertEquals().

Fluint defines its asserts as protected methods within the TestCase class. This has a similar effect to FlexUnit with respect to how the asserts are referenced.

FUnit uses static methods in an Assert class that is used by typing Assert.areEqual().

FluxUnit provides a mechanism for defining your own matchers, which is described in the documentation. These are referenced through the expect().to(equal(), value) structure; the equal is a matcher. As well as the .to method, there is a to_not method. The biggest difficulty with defining matchers is understanding the dynamic function and class mechanisms used in FluxUnit's programming.

If you are interested in maximum flexibility without having to extend the framework, then FUnit has the greatest range of predefined asserts. In this respect, it provides a good base for doing unit testing.

Sunday, 30 November 2008

Ethic or Relationship

I have just finished reading a book called “So you don't want to go to church any more” by Jake Colsen. It was recommended to me by a friend and I purchased a copy to read on the train on my way to and from work. I am not going to relate the story but it is a novel about understanding what Christianity is. Jake Colsen is a pseudonym for the two authors.

I find each chapter quite challenging so it is nice to sit back and reflect on what I have read. The big question is about the nature of Christianity and what God is seeking from those who believe. One of the issues raised is the question of whether religion is about the implementation of ethical standards or developing a relationship with the loving God.

The difficulty with seeing it as an ethical standard is that if people don't live up to the standard then they are judged and discarded. There is also the issue of where people gain the strength to live up to some of the exacting standards expected of them. Enforcing an ethical standard is difficult. Finding a balance between judging people and encouraging them within the ethical framework becomes a major issue.

Contrast this with developing a relationship with the loving God. If you are interested in building a relationship then you will overlook some of the behavioural issues. The authors clearly present the idea that God wants relationship and not rule following. They would contend that as we grow in relationship with him, we will find ourselves changing to take on more of his characteristics.

For me, the book carries a secondary message, and that is about communicating our vision and dreams to others. In the story, Jake is encouraged in his walk by a person called John. John never tells Jake what to do, but he seems always to have the right questions to ask and thought-provoking challenges to Jake's thinking. Those challenges and questions are always given when Jake is ready to accept them. How often have you been battered by someone who believes that you should simply accept all that they say? Their one desire is to have you do what they want. No doubt John believed that there would be a certain outcome to this journey, but he also knew that the journey had to be taken at a pace that Jake could handle.

I recommend reading this book although you may not like some of the views expressed especially if you believe that church is an institution.

Reference

Colsen, J. (2006). So you don't want to go to church anymore. Los Angeles: Windblown Media.

Saturday, 22 November 2008

FluxUnit-ing Flex programming

In last week's blog, I talked about a number of different testing frameworks for doing test-driven development (TDD) in Flex. This week, I want to continue the investigation of FluxUnit and how it can be used to write behaviour-driven development (BDD) style scenario specifications.

In behaviour-driven development, they talk of writing stories and scenarios. The story describes the stakeholder, the feature wanted, and the benefits. The scenarios provide specific examples of the stories. In general terms, a scenario might be written in the form:
Scenario:
   Given [pre-condition]
   When I [action]
   Then I expect [post-condition]

For any scenario, there may be more than one given, when, and then combination. For any given, there might be multiple when and then combinations.

Translating this to a testing tool means that I want some method of ensuring that the given pre-condition is created before executing the when action. Once the action is completed, there needs to be a way of testing that the post-condition applies. Finally, there is a need to clean up afterwards to ensure that there is nothing hanging around from this test that might interfere with the next test.

For a testing framework like FlexUnit, JUnit, or NUnit, the setup method provides a way of setting up the pre-condition so that it can be reused in a number of tests. The test method has to perform the when action and the testing of the post-conditions. The teardown method provides a clean-up mechanism. Of course, you can be lazy and put everything in the test method but that limits the reuse of the pre-condition set-up logic.
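The mapping can be sketched as a FlexUnit test case, reusing the CallRatePeriod example from later in this post (the test class name is mine):

```actionscript
import flexunit.framework.TestCase;

// Sketch: mapping Given / When / Then onto a classic xUnit test case.
public class ClonePeriodTest extends TestCase
{
    private var basePeriod:CallRatePeriod;

    // Given: the pre-condition, shared by every test in the class.
    override public function setUp():void
    {
        basePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
    }

    // When / Then: the action and the post-condition checks together.
    public function testCloneIsEqual():void
    {
        var clonedPeriod:CallRatePeriod = basePeriod.clone();
        assertTrue(basePeriod.equals(clonedPeriod));
    }

    // Teardown: clean up so nothing leaks into the next test.
    override public function tearDown():void
    {
        basePeriod = null;
    }
}
```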

A first look at FluxUnit and its describe, before, it, and after functions leaves you wondering how this can match the BDD structure. My approach isn't the only possible method but it does provide a clean structure, and the output from the tests reads like a BDD specification. Of course, I could change or extend FluxUnit's code to give me the labels that I would prefer, but my goal isn't to write a new tool. I simply want to use what is already available.

describe('Verify the operation of clone and equals methods', function():void {
    describe('Given a period', function():void {
        var basePeriod:CallRatePeriod;
        before(function():void {
            basePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
        });
        describe('When cloned', function():void {
            var clonedPeriod:CallRatePeriod;
            before(function():void {
                clonedPeriod = basePeriod.clone();
            });
            it("should find the two periods are equal", function():void {
                expect(basePeriod.startDay).to(equal(), clonedPeriod.startDay);
                expect(basePeriod.equals(clonedPeriod)).to(be_true());
            });
        });
    });
});

Note: I have left out the surrounding code that sets up the FluxUnit environment. The first describe simply acts as a description of the scenario; it encloses all the given, when, and then combinations that need to be satisfied. The first nested describe states the pre-condition (the given). The before call is required so that this logic is executed in the right sequence. The before calls aren't strictly essential, but because of the way FluxUnit works, any code outside a before call is executed while FluxUnit builds the execution sequence, whereas the before code runs once the build is complete. Leaving set-up code outside a before can therefore leave variables undefined or null, so using the before method is preferred. The describe nested within the given describe provides the when action; again, the actual action should be placed in a before call. Finally, the it call defines the post-condition tests.

The output from FluxUnit is built from the text entered in each of the describes and the it, in this case giving a clear statement of the scenario, the pre-condition, the action, and the expected post-condition.

If I had used multiple when describe methods then I may have needed to introduce an after method. It wasn't necessary in this case. Multiple it method calls are also possible but you need to decide whether you want to report each post-condition separately.

If I was to enhance FluxUnit, I would change the describes so that the methods that I was calling were scenario, given, when, and then. This would make it clearer exactly what I was trying to do. The other issues that I have with the way that the code is executed are more than likely the result of using Flex rather than a result of the author's design.

The only reason that I will not be using FluxUnit on my current project is that I need a testing framework that will work in a Maven build context. Building the test program would work but I am not sure that it produces the required output so that the available Maven plugins can determine whether the tests all passed. This is on the list for further investigation.

Saturday, 15 November 2008

Flex testing frameworks

Over a month ago, I was asked to do a Flex programming exercise to demonstrate my ability to program. They specifically asked me to use the FlexUnit testing framework and the Cairngorm model-view-controller (MVC) framework. In my usual way of investigation, I explored the range of possibilities for testing frameworks for Flex. I did look up the MVC and MVP (model-view-presenter) options but only experimented with Cairngorm.

There are a lot of what I would call “me too” testing frameworks available. Some are little more than announcements of intent to produce one. The ones that I will review here were selected because they indicated some unique features rather than simply saying that they implemented the JUnit-style framework. The frameworks looked at included FlexUnit, fluint, FUnit, and FluxUnit.

What are my credentials for being able to do such an evaluation? Some thirty plus years ago, I started my career as a programmer in the computer industry. One of my early projects was to write a terminal simulator so the programmers could test their programs without having to have access to the networked terminals. On the same project, I was also involved in building what might now be called an integration tool. The team that I worked in was responsible for support routines that needed to be tested and released for the application programmers.

Over the last nine years as a lecturer in a university, I have been working with and teaching test-driven approaches to programming using both JUnit and NUnit. When I start a new project, one of the first things that I look for is a testing framework for the languages that I will be using. For the DotNET environment, I also wrote my own database test unit (DBUnit) and a Forms testing extension for NUnit. As a result, I feel that I can talk with some confidence about testing frameworks and what they have to offer.

FlexUnit

I am going to use Adobe FlexUnit as the base testing framework since it is distributed by Adobe through their open source site and appears to be fairly widely used and supported in the Flex community. It, along with dpUnit, the forerunner of fluint, was the first testing framework that I experimented with in the Flex environment.

FlexUnit is based on the original JUnit / SUnit (Smalltalk testing framework) design. It doesn't use reflection but requires the programmer to build suites of tests that are then passed over to the runner. This management code is tedious to write and it is easy to miss adding a test to the suite. A test can be another suite of tests or a reference to a specific test. You have to be disciplined to ensure that you set up this management code correctly. The usual practice is to put a class method called suite at the front of all the classes that implement tests (i.e. extend the TestCase class). This method simply adds the tests to a suite that is returned to the caller (i.e. 'theTestSuite.addTest(new TestClass("testName"));'). You then create one or more suites in a hierarchy that enables all the suites to be passed in one batch to the runner.

var testSuite:TestSuite = new TestSuite();
testSuite.addTest(testCase.suite());
testSuite.addTest(testCase2.suite());
testRunner.test = testSuite;
testRunner.startTest();

For those used to the early versions of JUnit, this conforms to the practice that was then required to set up the test suites.

The available asserts also follow the original pattern of assertEquals, assertNotEqual, etc. These are all provided in the TestCase class which you inherit into your own test classes.

There are also FlexUnit Ant tasks and the Flex Mojos available that can be used to integrate FlexUnit tests into an Ant or Maven build process. I haven't tested the Ant tasks but have used the Flex Mojos to build and run tests.

Fluint

This was originally called dpUnit and is developed by digital primates IT Consulting Group. Fluint claims a number of advantages over FlexUnit including support for asynchronous tests and the testing of user interface components. At this point in my experiments, I haven't used these advanced features but they are on my list of things to experiment with.

In many respects, fluint follows the example of FlexUnit. For basic testing, one of its advantages is that you don't have to create a suite that contains the individual tests. It uses reflection to identify the tests looking for method names that begin with test or the [Test] attribute on a method. My experience so far indicates that the use of attributes hasn't extended to the test class (TestFixture) or the setUp and tearDown methods. Experiments are ongoing.
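Both identification styles can be sketched in one test case (the class path for fluint's TestCase and the test class itself are my assumptions, reusing the CallRatePeriod example from an earlier post):

```actionscript
import net.digitalprimates.fluint.tests.TestCase;

// Sketch of a fluint test case: tests are found by reflection, either
// from the method-name prefix or from the [Test] metadata tag.
public class PeriodTest extends TestCase
{
    // Picked up because the name starts with "test".
    public function testStartDay():void
    {
        assertEquals(4, new CallRatePeriod(4, 7, 10, 48, "WeekEnd").startDay);
    }

    // Picked up because of the metadata tag, despite the name.
    [Test]
    public function cloneIsEqual():void
    {
        var period:CallRatePeriod = new CallRatePeriod(4, 7, 10, 48, "WeekEnd");
        assertTrue(period.equals(period.clone()));
    }
}
```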

Fluint also uses a slightly different approach to building the base suite.

var suiteArray:Array = new Array();
var testSuite:TestSuite = new TestSuite();
testSuite.addTestCase(new testCase());
suiteArray.push(testSuite);
testRunner.startTests(suiteArray);

In many respects, this is cleaner than the process used for FlexUnit. In terms of the base assertions used by fluint, they are identical to FlexUnit. The asynchronous testing seems to be a new set of methods that don't fit the traditional assert structure. I will report on those once I have conducted some experiments.

Fluint also provides Ant tasks and, given the Flex Mojos' ability to define your own test runner, I suspect that it would be possible to integrate fluint into a Maven build: the Flex Mojos use the FlexUnit Ant tasks to run the tests, and Maven allows Ant tasks to be integrated into the build.

At the basic level, fluint simply provides a different way of setting up the test suites. If it has a real advantage, it has to be in the support that it provides for asynchronous and user interface testing. You may have noticed that the name isn't FlUnit but fluint; the authors argue that it is aimed at integration testing.

FUnit

FUnit follows the more recent versions of NUnit in the way that it identifies tests and in the structure of its asserts. It uses a tag-based attribute model to identify test classes and the tests that they contain. This means that there is less set-up coding needed to build the test suites. The test classes still need to be added to a suite, but not the individual tests. I am not going to illustrate its coding pattern here as I don't think it differs enough from fluint's.

It has one major drawback: in its current version the GUI interface doesn't run the tests. It has to be run through a console runner and, as of this writing, the only way that I have been able to view the console is by running the test program in debug mode within Flex Builder. Not exactly helpful. However, the next version of the Flex Mojos will support the running of FUnit tests in a Maven build, so it may be useful in that context. What is available in the library presents a nice user interface but doesn't report on the tests. Once they have this implemented, this could be a good package.

The assert structure in FUnit follows the more recent structure introduced in NUnit. That is it uses an Assert class with static methods (i.e. Assert.areEqual()). This has the potential of allowing the user to easily extend the conditions that are available or even the more recent enhancements that have been introduced in NUnit.

FluxUnit

This claims to be a behaviour-driven (BDD) framework rather than a test-driven (TDD) framework. As a consequence, the way that it implements its tests is very different to the other suites covered in this blog.

FluxUnit uses describe, before, it, and after methods and dynamic classes. I am relatively new to Flex so some of the coding patterns used in FluxUnit are novel to me and I will not attempt to explain what they are doing. The code to run the tests is as follows. Note that the creation of the object containing the tests isn't explicitly assigned to anything.

Flux.setRoot(body);
new CRFluxPeriodOverlap();
Flux.Run();

In terms of defining a test, this looks quite messy to start with but becomes easier as you gain experience.

public dynamic class firstSpecs
{
    public function firstSpecs()
    {
        var self:firstSpecs = this;
        new Flux(this).Unit(function():void {
            with (self) {
                describe('initial state', function():void {
                    before(function():void {
                        // setup code
                    });
                    it("expected result", function():void {
                        expect(variable).to(equal(), value);
                    });
                    after(function():void {
                        // clean up code
                    });
                });
            }
        });
    }
}

This appears to work by defining dynamic functions. It is possible to nest describes and to have before and after functions that are global to all the describes. The user interface reporting is also interesting since it uses the text from the describe and it functions to structure the output.

The authors have an example of using it for asynchronous testing but not for user interface testing. There is also no reference to Ant tasks or any ability to work with the Maven Mojos.

Although I like the terminology used in behaviour-driven development, I still find the tools uncomfortable to use. At least this one is workable if you apply a simple template. It is, however, easy to misplace the bracketing, and it can take some time to get that corrected.

Conclusion

At the basic functionality level, there is little difference between any of these frameworks. FlexUnit appears to be a proven framework and is able to be used in Ant and Maven builds. FUnit appears to have Maven support and possibly also could be used in Ant builds but its lack of a GUI is frustrating. Fluint has the additional functionality for asynchronous tests and user interface component tests. It also has Ant support and may be easily integrated into a Maven build. FluxUnit provides more readable output.

My personal order of preference at this point is fluint and then FlexUnit. I would use FUnit ahead of FlexUnit if the graphical user interface worked. I intend to keep experimenting with FluxUnit simply because I like the BDD style but until I see support for Ant and Maven builds, I wouldn't use it on a project. I would also like to see the coding structure improved so that it was easier to set up and use.

Saturday, 8 November 2008

Sabbaticals and calling

In the process of looking for paid employment, I picked up a book written by MacKenzie and Kirkland (2002). In a chapter on sabbaticals, they talk of the need for a time of rest; taking time out from the normal routine to be restored and to re-evaluate the direction of our lives.

Supposedly, the time that we had away last year was a sabbatical but it was filled up with the normal routine of research (working on the PhD). It certainly had some elements of rest and restoration but it wasn't the break from routine that enables real refreshing.

Now, as I work in industry, I am feeling like I am having a sabbatical. The work is different and so are the pressures. I am seeing my academic work in a new context and considering how I might apply all that I have learnt in a new context.

Another chapter in their book focuses on calling. Here they make an interesting observation that our call isn't to a task. Rather it is to a relationship with God and with those around us. It is living out that relationship in whatever context we work that really illustrates our calling.

In a Christian context, a call is often associated with entering Christian work but MacKenzie and Kirkland are emphasising that every Christian is called. The way that we work out our calling will show in the way that we do our work and interact with those around us.

Reference:

MacKenzie and Kirkland (2002) Where's God on Monday? Christchurch: NavPress.

Agile theory versus agile practice

After 17 years working as a lecturer, I have returned to industry as a developer. I return having endeavoured to encourage students to use test-driven approaches to software development and having completed research on the ways that software practitioners express their understanding of object-oriented software development. My knowledge of the implementation stacks is more theoretical than practical, although in developing my teaching samples, I endeavoured to follow agile practice. The bulk of my code has unit tests.

In the process of obtaining this contract, I had to do a programming exercise. While doing that exercise, I learnt again about doing small incremental changes with your tests. If a test does too much then it becomes too difficult to implement. If you find yourself going to the debugger to determine the cause of the problem then the test is probably asking for too big a change. Small incremental changes can be implemented rapidly and tested quickly.

One week into my assignment and I am thinking about the challenges that lie ahead. This week has seen me learning about the system that I am to work on and installing the development environment on the computer I am to use. The development environment uses the Maven build tool and a source control system, and the appropriate testing frameworks are installed. Having the tools installed doesn't lead to an agile process. One of my first observations was that for the code that I am to work with, there are no tests. I will primarily be working with Flex on user interface issues but there is a lot of functionality in these user interfaces and I see no tests.

As I look at the physical environment with its individual workstations, I wonder whether an agile process will work in this environment. Agility is a mindset and not an environment. Why don't programmers write tests first? It comes down to the way that they perceive or understand the programming task. For me, an upfront test or behaviour specification isn't about proving that my code works. It is an executable specification of what the code is expected to do. The first part of any programming exercise is understanding the requirements. If I can turn those requirements into executable tests then not only am I showing that I understand what is required, I am also showing that I understand how I will verify that I have satisfied those requirements. Whether I use TDD or BDD to drive the development of software, the key thought is that I am writing executable specifications that will verify that the code I am about to write does what it is supposed to do.
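As a sketch of what I mean by an executable specification, here is a minimal example in Python rather than Flex. The requirement (that two date periods must not overlap) and all the names are illustrative, not taken from the system I am working on:

```python
# The requirement is written as tests first: the tests record what the
# code is expected to do before any production code exists, so they act
# as an executable specification, not just a check after the fact.
from datetime import date

def periods_overlap(start_a, end_a, start_b, end_b):
    # Two closed periods overlap when each starts on or before
    # the day the other ends.
    return start_a <= end_b and start_b <= end_a

def test_adjacent_periods_do_not_overlap():
    assert not periods_overlap(date(2008, 1, 1), date(2008, 1, 31),
                               date(2008, 2, 1), date(2008, 2, 28))

def test_nested_period_overlaps():
    assert periods_overlap(date(2008, 1, 1), date(2008, 12, 31),
                           date(2008, 6, 1), date(2008, 6, 30))
```

The test names describe the requirement in the domain's own words, which is the point: a reader can tell what the code must do without reading the implementation.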

To write an executable specification, I have to make design decisions about the implemented solution. I have to decide what objects I will need and what behaviour those objects will implement. I need to think about the architecture of the system and how that architecture might change as new requirements are specified. The executable specifications will help ensure that my changed architecture still delivers what is required.

I don't see the writing of executable specifications as that different from what I might have done thirty years ago when I was COBOL programming. When I wrote code, I was always thinking about how I was going to test the code. Sometimes those tests were automated, but there was always a need to ensure that the data was there in order to test out the required features of the software. What I see as different about my practice now is that I am creating executable specifications at the first opportunity rather than leaving things until I have some code to test. I am not so much writing the specifications to test my code as writing them to reflect my understanding of what is required. I hope that this is reflected in the names or descriptions that I give my executable specifications.

Although the TDD or BDD tools might help me create the executable specification, it is really the mindset that I bring to the task that determines whether I apply agile practices. My conception of the task will dictate how I approach the task. If I want agility in my practice then agility has to drive my thought processes.

Friday, 31 October 2008

Finding ones roots

Last year when we travelled through Ireland, we visited Dungiven and found what was left of farm buildings on a farm that we believed might have belonged to my Thompson great-grandparents. I wrote to a historian of the Dungiven and Feeny area and received a letter back this week confirming that the farm had belonged to my great-grandparents. With my grandmother Thompson (nee McAteer) also originating from Northern Ireland (Port Glennone), I am feeling quite entitled to claim my Irish heritage.

The house should be here but it seems to have vanished long ago. The garden gate remains and across the road some farm sheds. However, the view to the local hill is really good.

Saturday, 18 October 2008

Go forward

After months of drafting and redrafting, my PhD thesis is finally to be submitted. I now have all the sign offs and the thesis printed ready for examination. I will be delivering it on Wednesday on my way to the National Cycling Road Championships. Do I think it is a perfect thesis? I doubt whether there is such a thing. Could it be better? I suspect that it could be better but you can't go on revising for ever. After Wednesday, we wait for the examiners to read and comment on it.

Also on the positive news front, I signed a contract for a six month software development role. So after almost 10 months of dedication to the completion of the thesis, it is now a return to a software development role. We had decided to give up active job hunting until the thesis was completed but we were approached by an agency about three weeks ago to see whether I would be interested in this role. We decided that we would express interest but wouldn't place a lot of confidence in getting the position.

Both the completion of the thesis and finding employment have been major struggles over the last few months. At times, it would have been easier to walk away, especially from the thesis, but there always seemed to be the message to keep moving forward. We had come too far to turn back. There is still a lot of uncertainty on the path ahead, but the message is clear that we keep moving forward and allow things to run their course.

Saturday, 11 October 2008

Another Panorama

After a lot of experimentation and learning, we have created another 360° panorama. This one was taken in the Wellington Botanical Gardens. It is one of a sequence of three. However, the open source display tool that we are using seems to be difficult to configure, so only one is available for viewing at present.

Click on the image to view the panorama.

Friday, 10 October 2008

What does God have to do to catch our attention?

Ellison (1982) contends that the real victims of the plagues brought upon Egypt were Egypt's nature gods. They were increasingly shown to be powerless to protect their worshippers. They could not stand before Israel's God, Yahweh.

But why should God cause such destruction? Weren't there easier ways for him to show Israel that they were his people and that he would not back down on his promises?

I doubt whether I could answer that question satisfactorily, but I can see that in our current times, God is having trouble catching our attention. Increasingly, God or the notion of a god is being discarded in modern society. We believe we are able to cope and build our own future, but is that really true?

What did we learn from the 9/11 attack on the twin towers? Is there a message for us in the current collapse of the financial markets?

Surely the 9/11 attack shows us that we are not building harmony in the world. Some groups still feel disadvantaged and that their way of life is under threat. Groups which are called terrorists are able to recruit in western nations from people who feel oppressed or forgotten by the western systems. Is it possible that in the so called terrorist activity that there is a message for us about the need to hear and change? In arrogance, western leaders have stood up and condemned the attacks and sought to bring to judgement any who might be linked with the terrorist groups. Where is the desire to hear the pain and hurt of people who feel rejected and neglected? There is a hardening of the heart to stand against all that might threaten the western way of life.

Now there is the financial meltdown. This isn't an attack from outside but rather an attack from within. The western financial systems were superior to any other options. They brought wealth and independence. Now their foundations are shaking and the attempts to bail them out are struggling to make a difference. Governments speak of needing to support the system because our survival depends on these systems. If the financial systems collapse, the western style of life is under threat once more.

Are we seeing misplaced confidence in man made systems? Are we seeing just how much we need to be shaken? Is God trying to deliver us a message and are we not hearing?

Egypt depended on its nature gods. They were challenged and found wanting. Israel time and time again needed to be challenged to hear God as it became comfortable and took on the ways of the people that surrounded it.

God continues to try and get our attention, but are we listening? What have we placed our confidence in instead of trusting in him and hearing the path that he would have us take? How many gods have we put in place that need to be shown as ineffective before we will turn again and listen to God's message?

The prophets may not be calling from the rooftops warning us of the dangers of the path that we have chosen, but the signs are there that we have misplaced our faith and that we need to hear again God's call for our lives.

Reference

Ellison, H. L. (1982). Exodus. Edinburgh: The Saint Andrews Press.

Saturday, 4 October 2008

Yahweh

Ellison (1982) translates the word as “I shall be that (what) I shall be.” This is quite different to the normal translation of “I am who I am.” “I am who I am” places an emphasis on here and now. God is who he is and we shouldn't question it. Ellison's translation presents an interesting challenge. “I shall be that (what) I shall be” is giving a future tense to God's name. It is looking forward to what God will do and accomplish for his people.

In the context of this passage (Exodus 6:2-27), it is this future hope that the people of Israel need to trust in. They and their forebears have received promises that have remained unfulfilled. Now the people are forced labourers under the Egyptians. This surely isn't the fulfilment of the promise of being God's people. There would seem to be valid reasons for disappointment in God, and being called "I am who I am" wouldn't carry a great message of hope.

But “I shall be that (what) I shall be” is forward looking. It is raising the possibility that God will act and restore hope for his people.

As I write this, I am coming to the end of the writing of my PhD and although I have applied for positions, nothing has materialised as a job offer. It isn't the first time in my life that a change in direction in my career has occurred. Last time, it felt like all that I had worked for and done counted for nothing. Industry said that I could not change from being a mainframe programmer to programming PCs. This was despite having just moved into supporting IBM AS400s. Now, the message from industry is that I don't have enough current experience of the technologies and that I am over qualified for a position that would enable me to get up to speed with the technologies.

Trust in man gives me no confidence, but trust in a god who will be what he will be gives me hope. His plan and promise will be worked out even if I can't see it yet. I simply need to place my trust in him.

Reference

Ellison, H. L. (1982). Exodus. Edinburgh: The Saint Andrews Press.

Friday, 26 September 2008

Completing a PhD

I always thought that the objective of universities was to foster creative, innovative, and conceptual thinkers. As I near the end of my PhD, I feel as though all of my innovative and creative thoughts have been driven out of me but not into the thesis. Instead, I am being told just to do as my supervisor says and don't try anything new. A PhD is about being certified as a researcher in a particular research paradigm. Learn the rules of the paradigm and don't shake the boat.

In my teaching, my objective has been to develop people who can think through the issues and reason their way to solutions based on having a sound conceptual framework. If as a result of this learning, the person challenges my thinking or takes a different approach based on sound reasoning then I should be pleased since they are showing the initiative that I am trying to foster.

The closer I get to the end of my thesis, the more I feel that I am trying to guess what my supervisor wants and what the examiners might want. What those objectives are becomes clearer with each interaction. With each lot of feedback, I realise just how much time I have wasted by being a free thinker and by not having allowed my supervisor to dictate the steps to this point.

Here is my advice to a new student on how they should approach a thesis. I would recommend that you:

  • learn what is of interest to your supervisor,
  • learn how they approach their research,
  • seek to find a way to extend their work or to pursue something that your supervisor wants to explore,
  • follow a research method that your supervisor would use,
  • uncover the reasoning that your supervisor has for the research questions and research method,
  • follow the structure that your supervisor expects when writing it up,
  • learn your supervisor's approach to writing by writing papers with them along the way and reading all the papers that they have had some influence in writing, and
  • don't pursue anything that you might be interested in and your supervisor has no interest in.

The bulk of your learning is related to learning about your supervisor and their methods. In Thomas Kuhn's (1996) terminology, you are learning the research paradigm and becoming indoctrinated in the ways of that paradigm. Thomas Kuhn argues that a student learns the methods of the paradigm and what is or is not acceptable for the paradigm. They are not expected to challenge the paradigm or to draw on alternative paradigms. The student is to conform in order to join the community. If they don't conform then they will not receive the accreditation to become part of that community.

The bottom line is to ignore any thoughts that you might have that challenge the paradigm. Just accept it and use it. Once you are indoctrinated then you can indoctrinate others.

Reference

Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University of Chicago Press.

Saturday, 20 September 2008

Panorama Stitching

Since my last report on stitching panoramas, I have been out and taken a number of new sequences. I am discovering a range of issues that influence the quality of the final panorama. Some of these relate to the technique used in taking the images that compose the final image. Others relate to the stitching process itself.

When I go out to take an image, I now try to set the camera to manual for all settings. The reason is quite simple: any change in focus or exposure can cause problems in the final stitched image. The same applies to adjustments in the white balance. I had mainly thought that this applied when taking photos under artificial lighting, when the light temperature is lower and you are left with an orange tinge in the photos. However, when taking the sequence of photos that makes up a spherical image, a camera set on auto white balance makes its own decisions about the nature of the light temperature and adjusts the image colours as it believes is required. The solution is simple: never leave anything to be set automatically by the camera. Take control and set everything yourself.

Some of the stitching programs can compensate for some of these problems, but not all. If the image is out of focus then it is out of focus; no attempt to improve it using software will resolve the problem. A change in focal length can be handled by a stitching program such as Hugin, but if the change is too great then you might have gaps between the images. The stitching program can't fill in the gaps. The panotools used by Hugin will also attempt to correct exposure problems by matching the histograms of the portions that overlap and adjusting one of the images to that revised exposure. However, this can lead to other problems. In effect, the less patch-up work that has to be done in the processing workflow, the better.

Adjusting the camera settings to manual doesn't solve the change in light conditions that occurs when taking an image while the sun is setting (see the Days Bay wharf panorama). As the light fades, there is a need to adjust for the change in tones caused by the change in the amount of light available and possibly the change in colour of the light (i.e. a change in the light temperature). This is difficult to achieve manually and the automatic options simply compound the problem. With the Days Bay wharf sunset, the problem can be seen in the zenith image at the top of the sphere. It simply doesn't seem to match. I need to explore the blending options because I am not sure that these have been applied in a way that might resolve the problem.

The second issue in taking these images is movement in the camera positioning. I ensure that the tripod is level and endeavour to ensure that it won't move, but even leaving your hand resting on the camera can change the camera angle by one or two degrees. This seems to have more impact when the angle changes in the vertical dimension. Hugin provides a facility to find control points, but the default option failed to find points between layers. There are also alternative programs that can be used from within Hugin, and it can be time consuming testing each of these options. You also land up with a clutter of files that contain the definitions of the control points between the images.

As someone who wants to focus on the photography and not the processing after the event, I find this quite frustrating.

The long wait between drinks

It can be difficult waiting for a clear direction in terms of how one should use your abilities and understanding. We are being tested in this respect at the moment.

I went through the interview process for one position, completing a number of programming exercises along the way. I was informed that I was next on their list to be employed because I had exceeded their expectations. However, contractual requirements meant they needed client approval to expand the team. That approval wasn't given so I was informed that the position was no longer there but that they would act as a referee for me.

Since then, I have been through another two interviews for possible positions with another organisation. Again not an outright 'no' but rather there is no position available at this time that matches your talents. They indicated that they would advise me if anything did come up that might be suitable. Maybe the economic uncertainty is beginning to impact some of these organisations.

Each of these is a blow to my confidence. Sure, I lack experience in the use of some of the tools currently used in industry, but when given the opportunity, I have usually exceeded expectations, including sometimes my own. In the interview process, I don't hide the fact that I haven't used some of the tools. It does frustrate me that the usage of specific tools has priority over conceptual understanding, but this is getting a little off topic.

The PhD thesis has now reached a point where there is a complete draft. It is oversize by a little over 11,000 words. Major revisions of the larger sections are required if I am to get it down to the 100,000 word limit.

I have things to do that occupy my time but the issue is that they don't help the cash flow. That concerns me but I sense that I am supposed to be where I am and doing what I am doing especially with regards to writing the thesis.

There is another piece of uncertainty in this equation in that I am unsure of what I am supposed to do next. I enjoy developing software, especially when I can focus on problems to do with providing the functionality. I am not so keen any more on going through all the set-up requirements to put in place a build system or to make the tool set work. Both in the programming environments and in the panorama stitching, I see that we haven't learnt to resolve the integration problems. We still expect the user of the tools to do the integration of the tool set and to locate the components that need to work together for the task. Maybe this says something about where I am at in my thinking with respect to my own role.

The other thing that I enjoy doing is helping others learn and discovering whether the approach that I am using is really helping the other person learn what I expected them to learn.

The problem seems to be that there are few positions available that allow me to focus on these two aspects without consuming more time than I have available.

Sometimes, we simply have to trust that things will work out and not panic when solutions don't seem to be appearing in the way that we anticipate.

Sunday, 14 September 2008

Web site development

Over the last month, my primary focus has still been writing the thesis. At the end of last week, a full draft of the thesis was completed. There has also been an emphasis on finding work. This saw me having to complete a programming project using Flex, as reported on 23 August. Over the weekend, I returned to building our own website (www.thompsonz.net).

The site was originally built using a template designed by Andreas Viklund (http://andreasviklund.com). The difficulty was that common items such as the menu were on every page, leading to difficulties with maintenance. When I taught a course on the design of web-based information systems at the beginning of 2007, I started building an XML-based version of the site to overcome this maintenance problem. With last year's trip to Finland and the pressures of the thesis, the completion of the project was delayed, but this weekend I finally managed to convert part of the site to being generated from XML. The outstanding issues proved easy to resolve and now the focus can shift to putting up some content.

Not directly linked to the project to translate to XML was putting up a couple of photo galleries. This has been done using Adobe Lightroom and its Flash gallery template. Having learnt Flex programming, the next goal is to create our own Flash / Flex based galleries.

When I taught the course, I endeavoured to get the students thinking about the separations between content, structure and format. This also included separating common content from the specific content of a particular page. The diagram that I used to try and explain my objective is on the right.

It is this usage of XML and XSLT that I have used for our website. The schema (XSD) provides the rules for structuring the content (XML). I created a common content template (XSLT) that also transforms the XML to HTML. I am using XSLT version 2. The actual formatting is controlled by a base style sheet (CSS). There are a couple of subsidiary style sheets that are used to give certain groups of pages their own style. When transformed, I end up with the HTML pages that are the website.
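The XSLT 2.0 pipeline itself is hard to show briefly, but the separation it enforces can be sketched in a few lines of Python. The page structure, file names, and menu content below are illustrative, not those of the actual site:

```python
# Sketch of the content/structure/format separation: page content lives
# in XML, the shared menu and page layout live in one template, and a
# transform joins them. Changing the menu then touches one place only.
import xml.etree.ElementTree as ET

# Common content shared by every page (in the real site this sits in
# the common XSLT template rather than a string constant).
SHARED_MENU = '<ul class="menu"><li>Home</li><li>Galleries</li></ul>'

# Structure: one layout for all pages; formatting is left to the CSS.
PAGE_TEMPLATE = """<!DOCTYPE html>
<html><head><link rel="stylesheet" href="base.css"/>
<title>{title}</title></head>
<body>{menu}<h1>{title}</h1><p>{body}</p></body></html>"""

def render_page(xml_text):
    # Content: each page supplies only its own title and body as XML.
    page = ET.fromstring(xml_text)
    return PAGE_TEMPLATE.format(
        title=page.findtext("title"),
        body=page.findtext("body"),
        menu=SHARED_MENU,
    )
```

For example, `render_page('<page><title>About</title><body>Hello</body></page>')` produces a complete HTML page with the shared menu, the DocType, and the linked style sheet already in place.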

I discovered that Microsoft's Internet Explorer will accept the XML file as input and initiate the transformation. It then displays the file as expected. Unfortunately Firefox and Google Chrome look like they do the transformation but they don't format using the style sheet (CSS). I wonder whether this is because the style sheet is inserted by the transformation. A small problem that I will explore later but since it is fairly easy to generate the pages using the Oxygen XML editor, I don't need to do the translation on the server or in the browser. This may change as the number of pages making up the website increases.

The other discovery is that Microsoft's Internet Explorer needs a DocType declaration before it will display the generated file correctly. Firefox and Google Chrome don't. However, ensuring that you have a valid DocType statement isn't that difficult.

Overall, another good learning exercise that has provided an easier to maintain website without purchasing a lot of expensive tools.

Saturday, 30 August 2008

Seasons

“God knows what He's about. If He has you sidelined, out of action for a while, He knows what He's doing. You just stay faithful – stay flexible – stay available – stay humble.” Charles Swindoll (1994) Growing strong in the seasons, p 531.

Over the last eight months, I have primarily been focussed on the writing of my PhD on practitioner perceptions of object-oriented programming. It has been a difficult road as we have learnt to survive on my wife's income. During that time, I have applied for positions in academia and in industry, but so far there have been no offers of employment. Now, as the cash reserves from the redundancy payout dry up, the feeling of having been sidelined grows stronger.

It would be nice to argue that my faith has remained strong throughout, that I have never once doubted that this is where I should be and what I should be doing. But that isn't true. I have wondered whether I have lost touch with those programming skills that I used to have and whether I would ever be teaching again. The sense of being sidelined forever has sometimes been overwhelming. That isn't helped when you have spent weeks writing a chapter only to have it pulled apart by your supervisors and you are back redrafting and hoping that this time, you are finally writing what they think is wanted.

This is where Swindoll's quote offers reassurance. God knows what he is doing and we simply need to remain faithful, humble, and available. In his time, he will open the path ahead and we will see more clearly what he has planned for us. As Jeremiah 29:11 says "For I know the plans I have for you," declares the Lord, "plans to prosper you and not to harm you, plans to give you hope and a future." That message was delivered to Jeremiah when Israel was in exile and looking like it had no future.

Power and Love

Barclay (1975), in a commentary on Mark 14:17-21, says “Here is the whole human situation. God has given us wills that are free. His love appeals to us. His truth warns us. But there is no compulsion. We hold the responsibility that we can spurn the appeal of God's love and disregard the warning of his voice. In the end there is no one but ourselves responsible for our sins” (p 391).

Barclay makes this comment in response to a passage where Jesus openly speaks of one of the disciples betraying him. Jesus never says which one. Barclay says that Jesus could have stopped Judas or if he had identified him, the other disciples would have taken action to stop Judas. But that isn't God's way. God doesn't force his will upon us. He appeals to us to follow the path that he desires for us. In the end, the choice is ours.

The consequences of not hearing God's appeal are all around us. As Barclay says we are responsible for our sins. Without that being the case, we would not learn or turn away from sin.

If God in his power continually moved to stop us or to cover over the consequences, why should we stop doing things that have the potential for disastrous consequences? Those consequences would never happen, so the actions no longer matter.

In Barclay's commentary on Mark 13:3-6, 21-23, he talks about the antinomians, who believed in nothing but grace. The law was abolished. To go on sinning was to ensure that God's grace continued to grow. If God stepped in to stop the consequences of our actions then we could argue the same. Why stop, if it allows God to show more of his power? It doesn't matter what we do; God will always stop us. We lose our responsibility for our actions.

Love seeks us to follow, but it does not force us to do so, even if it is within the power of the lover to do so.

Reference:

Barclay, W. (1975). The gospel of Mark (revised ed.). Edinburgh: The Saint Andrew Press.

Saturday, 23 August 2008

Photography

Two things have helped the photography effort this week. The first is that the Kererū, the New Zealand native wood pigeon, has been back in the trees around our house; the second has been the snow dumps on the Rimutaka ranges just north of the Hutt Valley. The ranges are visible from different points around the Hutt Valley and Wellington.

The Kererū are fairly large birds, so they are not easily mistaken. We have the additional advantage that they like the Kōwhai trees along our neighbours' drive and the buds on our rather poor specimen of a pear tree. Te Papa, in conjunction with Victoria University of Wellington, runs a Kererū Discovery project in which they encourage people to report sightings and send in photos. We have up to five birds visiting each day at present, and on Thursday I managed to capture some photos of the birds in flight. Usually when I see them they are up high in the trees and surrounded by foliage, which doesn't lead to very good photos, although I do have some that have made it to the Kererū Discovery pages.


Also on Thursday afternoon, there was a beautiful clear sky, so I took the opportunity to use my spherical panoramic tripod head to capture some panoramas of the snow-covered mountains. With the orange of the willows as they prepare to sprout leaves, some of the images, when stitched, are quite spectacular. None of these are 360° or spherical images, although we do have some of those to share once we resolve the final stage of preparation.

Thesis writing

Thesis writing is my other primary activity at present. I am in the last stages, turning my attention to the discussion and conclusions. I have to admit that although the subject matter was of interest nearly two years ago, I am finding it frustrating now as I try to write the results up in a way that initially satisfies my supervisors and later the examiners. Partly because my thesis changed direction over the period that I have been working on it, and partly because I wasn't encouraged to write sections earlier, most of the thesis has been written over the last eight months. Some of the difficulty is writing an educational thesis when I have been more focused on technologies. The shift in thinking is quite large.

From a technology perspective, my thesis is looking at how practitioners perceive what object-oriented programming is about. This has involved looking at how they express what an object-oriented program is and how they talk about designing an object-oriented program. The method that I am applying to the research is based on phenomenography. The objective is to uncover an outcome space that details the different ways that a group of practitioners express their understanding of a phenomenon. The result is a set of categories of description that in theory covers all the ways expressed by the group. This isn't a statistical exercise; it is very much an analysis of texts. As well as practitioner interviews, I have examined a set of textbooks based on the categories arrived at from analysing the practitioners' interviews.

The educational perspective gave the basis for the study. I have taken a relational perspective of learning (Ramsden 2003) that argues that the way that a learner approaches a learning task is dependent on their perception of the task. Some would call it the task representation. From a phenomenographic perspective, learning occurs when there is a change of conception of a phenomenon (Marton and Booth 1997). Hence if we understand the way that people perceive a particular phenomenon and the variations in critical aspects that make up that perception then we may be able to plan teaching to ensure that the perceptions that cause the greatest learning are those that are visible to the student and plan a pathway to those perceptions from those currently held by the learner.

My original plan was to implement a teaching plan and gather data on the change in learner conceptions over the period of the course but that failed when student enrolments in our courses began to drop and finally when I took redundancy rather than a position as a tutor. Losing the context for the research hasn't helped my struggle to complete the writing. I enjoy tasks where I can see a practical application. Now, the thesis seems to be little more than an academic exercise which occasionally gives me some enthusiasm when I uncover a new insight.

If you are reading this and think that any of this might be useful to you then consider leaving a comment and maybe we can get in touch. Job offers in teaching programming will also be accepted gladly.

References:

Marton, F., & Booth, S. A. (1997). Learning and awareness. Mahwah, NJ: Lawrence Erlbaum Associates.

Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London: Routledge Falmer.

Flex Programming

The last week has seen me focus on completing a programming exercise using Flash and the Flex development tools. It is difficult to believe that after 17+ years in the computing industry, programming on a wide range of equipment and in a range of languages, and 17+ years of teaching students to program at tertiary institutions, I am having to prove that I am capable of programming and of learning new environments quickly. Still, if that is what it takes, then we will do it.

I wasn't overly confident to begin with, as I have found JavaScript frustrating, and Flex uses a combination of an XML-based language for developing the layout and ActionScript. ActionScript and JavaScript are very similar, sharing the common ECMAScript standard. The company also wanted me to use a particular model-view-controller (MVC) framework for the exercise, so there was quite a bit to learn. They also mentioned the FlexUnit testing framework, and as I don't take the simple path, I decided that I would use that as well, especially as they claim to use agile methods and I prefer test-driven or behaviour-driven approaches.

Over the last week, I have downloaded Flex Builder and the frameworks, borrowed books from the library, and set about learning the language, environment, and frameworks. There are two testing frameworks, so I compared both and selected the one that I found easier to use. The end result was a small data entry system that felt more like a desktop application than a web-based application.

Maybe using a solution like Flex would give me an easier road to the type of facilities that I want to offer on our website. Flash is one of the options for displaying the 360 and spherical images so it would integrate nicely. Watch this space.

One of the things that I did with this exercise was write journal entries on the experience. Others have done this and published them on the web. I think I still need to learn about writing up the details as the entries make sense to me but have few code examples.

Three significant learning issues came out of the experience.

  1. When I installed Flex Builder, it told me that I needed to update the Flash Player, but I was sure that I already had the latest version installed, so I declined. Well, I did have the latest player installed, but not a version that incorporated the debugger. Without the debugger, I couldn't step through the code, and the testing frameworks simply reported success or failure; they didn't even report the expected and actual values. Once I was frustrated enough to try to resolve why the debugger wouldn't work, and reinstalled a version of the player that incorporated the debugger, not only was I able to trace execution and use breakpoints, but the testing frameworks also began to report expected and actual values and gave a stack trace, so I knew where the problems were occurring. This made a surprising difference to progress and reduced the levels of frustration. Lesson: If you are told during an install that something is out of date, check what the problem is and resolve it before moving on.

  2. Flex Builder doesn't know how to do any refactoring other than rename. This proved to be a real frustration as I endeavoured to extract methods or relocate items within the hierarchy. Refactoring and test-driven development go hand in hand; not having one part of the combination didn't lead to good code design. I recognised that I was already too eager to move on to implementing new functionality, and not having the refactoring tools just added to my resistance to getting in and improving the code design before moving on to the next feature. I did find that I was fairly naturally looking for ways to reuse already-written code, and I was a little frustrated that I didn't solve this with a couple of my form panel layouts that differ only by the source of the data in the data provider tag. In the long run, they may differ by more than that, but since this was a proof-of-concept (well, proof-of-skill) exercise, that may never happen. Lesson: Ensure that you refactor, and look for support in the tools.

  3. I have often heard questions asked in programming forums about how big a step a test should represent. During this process, I often thought that my next test was simply a small step, but once I started coding, I found myself running into problems. More than once, I went back and wrote another test that simplified the expected results. Guess what: that often resulted in a much smaller amount of code needing to be added or modified in that step. I am beginning to think that if a test requires more than a couple of lines of code to be added to the production code, then the step is too large. Certainly, don't write a test that requires too many variations to be checked to ensure that it works; it is likely to cause grief. Part of this project was about ensuring a space was covered by a number of rectangles, but that none of the rectangles overlapped. The place to start is with a single rectangle that covers the whole area, then look at two that cover the whole area. When you start looking at overlapping, again avoid having any uncovered area or multiple overlaps until the basic functionality is there. Grow complexity in small steps; the smaller the better. The debugger becomes a thing of the past. Lesson: Ensure that tests increase the program complexity in as small a step as possible.
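The small-step progression in point 3 can be sketched in code. The original exercise was in ActionScript with FlexUnit; this is a hypothetical JavaScript sketch (a close ECMAScript cousin), and the names `coversExactly`, `overlaps`, and the rectangle shape `{x, y, w, h}` are my own illustration, not the exercise's actual code. Each step adds one small assertion, mirroring the one-rectangle, two-rectangle, then overlap progression described above.

```javascript
// Hypothetical sketch of growing a coverage check in small test steps.
// Rectangles are {x, y, w, h}; the region to cover is also a rectangle.

function area(r) {
  return r.w * r.h;
}

// Two rectangles overlap when they intersect with positive area.
function overlaps(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// The region is covered exactly when no pair overlaps and the areas
// sum to the region's area (sufficient only because overlaps are
// excluded and the rectangles are assumed to lie inside the region).
function coversExactly(region, rects) {
  for (let i = 0; i < rects.length; i++) {
    for (let j = i + 1; j < rects.length; j++) {
      if (overlaps(rects[i], rects[j])) return false;
    }
  }
  const total = rects.reduce((sum, r) => sum + area(r), 0);
  return total === area(region);
}

// Step 1: one rectangle covering the whole region.
const region = { x: 0, y: 0, w: 10, h: 10 };
console.assert(coversExactly(region, [region]) === true);

// Step 2: two rectangles splitting the region.
const left  = { x: 0, y: 0, w: 4, h: 10 };
const right = { x: 4, y: 0, w: 6, h: 10 };
console.assert(coversExactly(region, [left, right]) === true);

// Step 3: introduce an overlap and expect failure.
const wide = { x: 3, y: 0, w: 7, h: 10 };
console.assert(coversExactly(region, [left, wide]) === false);
```

Each step needed only a few lines of production code, which is the point: when a step feels bigger than that, split the test.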

I hope this proves helpful to someone.

Saturday, 16 August 2008

Zero sum games?

This blog is stimulated by reading Mark 12:28-34 and Barclay's commentary (1975). There Jesus is responding to a question from a scribe as to “What is the first commandment?” Jesus' response is to love the Lord your God with all your heart and your neighbour as yourself. What I want to talk about isn't so much what it means to love God or our neighbour, but rather the attitudes that we develop in a society aimed at producing winners.

In part, we often look at things from a zero-sum perspective. Zero-sum describes a situation in which a participant's gain or loss is exactly balanced by the losses or gains of the other participant(s). That is, there is no overall growth, so for someone to win, others have to lose. Many games work on this basis: there is a fixed quantity of resource, and some will gain while others lose.

The difficulty is that we take this idea of there being winners and losers into many of life's situations. We have to outplay others in order to ensure that we get ahead. I used to work in a university where the emphasis was “publish or perish”, but I discovered it wasn't simply publish; it was publish more than others in your department or college. You might be publishing on a regular basis, but because others are publishing slightly more, you are the one under threat. It was a competition based on the number of papers published: constant comparison and competition with little encouragement or support.

That to me isn't love your neighbour as yourself. I could describe other situations where competition is the primary objective and not encouragement and building up but I want to look at this from the perspective of game playing.

There was a game that we played with youth groups that was very simple but clearly illustrates some of the issues. The group was split into two teams, and each team was given a small chart that told them how the scoring was done. They were instructed that the goal was to maximise the score. All each team had to do was choose either A or B. The scoring chart was:

                Both chose A   You chose A,   You chose B,   Both chose B
                               they chose B   they chose A

Your team           +1             -2             +2             -1

Other team          +1             +2             -2             -1

The objective of the game is purposely ambiguous. Life is a little like that. If maximising the score for your team is the objective and you want to win, then you want to choose B and have the other team choose A. That way your team gains two points and the other team loses two points. Of course, if they choose B as well, you both lose a point. If both choose A, then both teams gain a point, but the risk is that the other team will choose B and you will lose two points while they gain two. Between rounds, you send a negotiator to negotiate the play for the next round.
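The scoring chart above can be sketched as a small function. This is my own illustrative sketch, not part of the original game materials; the function name `score` and the `[yours, theirs]` return shape are assumptions.

```javascript
// Payoff for one round, as [yourScore, theirScore],
// following the scoring chart above.
function score(yourChoice, theirChoice) {
  if (yourChoice === 'A' && theirChoice === 'A') return [+1, +1];
  if (yourChoice === 'A' && theirChoice === 'B') return [-2, +2];
  if (yourChoice === 'B' && theirChoice === 'A') return [+2, -2];
  return [-1, -1]; // both chose B
}

// Mutual cooperation grows both scores ...
console.assert(score('A', 'A')[0] === 1 && score('A', 'A')[1] === 1);
// ... mutual competition shrinks both ...
console.assert(score('B', 'B')[0] === -1 && score('B', 'B')[1] === -1);
// ... and the mixed rounds are strictly zero-sum.
console.assert(score('B', 'A')[0] + score('B', 'A')[1] === 0);
```

Note that only the A/A round increases the combined score, which is the point of the game: the winning move for both teams requires trust.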

As long as teams focus on competition, both teams will lose out. Both teams' scores will decline. There seems to be no way to ensure that your team will increase its score. Only through cooperation can both teams improve their individual scores and, by the nature of the game, the combined score. But can you trust the other team?

My argument here is that if we are to love our neighbour as we love ourselves, then we need to be willing to take the risk. Cooperation and encouragement build positive relationships and enhance productivity for all. The real question is whether we are prepared to take the risk.

Reference:

Barclay, W. (1975). The gospel of Mark (revised ed.). Edinburgh: The Saint Andrew Press.

Saturday, 9 August 2008

Academic story

With the James Hargest High School reunion back in March, I decided that I needed to do some exploration of my school records. Not surprisingly, the school reports say that I would do better if I applied myself consistently. Now, as I struggle to complete the writing of a PhD, I really wish that I had learnt to apply myself better to things that I didn't really like. One of those things is writing, and now I am working on a major writing effort. Back in my recollections on 29 March, I talked mainly of my involvement in sport. Except for involvement in motor sport, that died when I went to university, but I didn't really focus on the academic work unless it inspired me or was of interest to me. That is how I ended up in computer science.

Not doing particularly well in my first year at university other than in the labs, I wasn't allowed to progress with my planned course in mechanical engineering. Having passed two maths papers, I took options that would allow me to progress toward the completion of a B.Sc. Computer science needed only two first year maths papers and I had those so I enrolled. The practical nature of the course with a focus on programming caught my interest and since the programming exercises supported the theory, I applied myself. The result was a B.Sc. and work in the computer industry.

As I reflect back, I realise that most of the subjects that I applied myself to involved some form of learning by doing. At high school, I focussed on maths because there were lots of interesting practical problems to solve such as plotting the path for an aircraft to get from one place to another in a cross wind.

Maths turned to theory at university but computer science had some interesting practical problems to solve and it drew on some of that maths knowledge built up through practical exercises. Theory for the sake of theory is a meaningless exercise for me so if I want to examine theory, I need to look for practical applications.

Now, after spending a number of years teaching in tertiary organisations, I argue that students need a conceptual foundation if they are to be able to be life long learners. Their conceptual framework lays the foundation from which they expand and develop new ideas. This leads to my current exploration of the ways that practitioners express their understanding of object-oriented programming. If I want to teach students to be good programmers then I need to know the possible conceptions that will help them to be good programmers and my learning exercises need to help them develop the appropriate conceptual frameworks in a way that grabs their attention and helps them to learn by doing.

But... and it's a big but, I struggle with the academic writing. If I apply myself, I can write, but I don't enjoy the exercise. I would prefer to be taking what I have learnt and applying it in a classroom. Yes, I would gather data that would help show that the teaching was effective, but do I need to publish the results? To stay within the academic community, publishing is a requirement, but publishing also serves another purpose: it puts your ideas out there for others to critique and provide their input. Again, I am not interested in critique that simply aims to pull apart. I am interested in critique that aims to suggest improvements; that is, critique that adds to what is being done in a productive way.

In 2006, I attended a pattern-writing workshop at the OOPSLA conference in Portland, Oregon. What interested me was their approach to looking at what someone had written. The approach isn't particularly new, but they apply it consistently. Richard Gabriel (2002) outlines their process, including the submission, shepherding, and the sessions at the workshops. There are some key things that happen in the workshop sessions that I think need promoting in dealing with academic work everywhere.

First, the author reads a short portion of the paper that is the heart of what they would like the workshop to examine. The writer then sits back and listens to the conversations. The conversations start with a summary of the work and then some positive feedback. The reviewers, the other participants in the workshop, then make suggestions for improvement. This isn't pulling the work apart; the suggestions are intended as pointers for improving the quality of the work. Once that is over, the author can re-enter the discussion, seeking clarification of any of the suggestions for improvement. In effect, this is a learning opportunity for the author, so that they can learn from others in the workshop and improve the quality of their work.

I would have loved such support as I learn this academic writing process and struggle to complete my thesis. So much of the feedback that I am getting simply appears to be negative. It tells me what is wrong and leaves me guessing at what I need to do to improve. I struggle to find any positives in the comments, and as a result, I wonder whether the task will ever be completed. Is anyone prepared to offer their encouragement?

Reference:

Gabriel, R. P. (2002). Writers' workshops and the work of making things: Patterns, Poetry ... Boston: Addison Wesley.