Change Artistry Poem

Change, oh no not again,
A volcano melts into a plain,
Change remains the same.

A(n) (im)perfect plan,
Results in blame,
Change, oh no not again.

We enter the theater of change,
Always knowing that,
Change remains the same.

We turn the sets and,
Raise our Change Artistry thunderbolts,
Change, oh no not again.

The change is poised to be sustained,
The chorus recite,
Change remains the same.

Change Artistry,
Helped us joyfully play this game,
Change, yes please
Change remains the same.

-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-

By Jean, Stuart, Ronald
Albuquerque, New Mexico
Change Artist Workshop


Quality vs. Speed Simulation: A Project Management Retrospective

When asked the pressing question: “How does a project get to be a year late?” Frederick P. Brooks, Jr. (The Mythical Man-Month, 1975) gave the sobering answer, “One day at a time”. More often than not, projects have to deal with small planning setbacks on an everyday basis. A conference hall has been double-booked, your programmer has suffered more than expected from the after-effects of a visit to the dentist, the server you ordered has accidentally been left behind in the mail room, a seemingly simple hotfix hasn’t worked. The result is that projects often find there are not enough man-days in a month. According to Brooks, one of the major reasons for this is that schedules are drawn up under the assumption that everything will go according to plan. But isn’t it the case that projects, by definition, never run exactly as planned? (Brooks doesn’t put it in these exact words, but he does give a guarantee of sorts that projects will not go as planned.)

Bearing this wisdom in mind, I put to myself the question of what unscheduled incidents we had encountered in our Quality vs. Speed Simulation (originally thought up by Gerald M. Weinberg). I decided to make a list. The list was meant to help me in drawing up schedules in the future.

The simulation was focused on the quality vs. speed (time-to-market) dilemma. Small teams competed against each other for the highest possible product score. The product score was determined by the number of bugs that remained in the product and by the remaining budget. The teams did not know the quality of the product in advance. The difficult questions the teams faced included: To what extent and how often do we want to assess the quality of the product (testing)? What price do we want to pay to raise the quality of the product (fixing)? With what (estimated) quality should we launch the product on the market?
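To make the trade-off concrete: a score of this kind rewards leftover budget and penalises shipped bugs. The article does not give the workshop's actual formula, so the function name, weights, and numbers below are purely illustrative assumptions, a minimal sketch of the dilemma the teams faced.

```python
# Hypothetical scoring sketch -- the real workshop formula is not stated
# in the article; the bug penalty and budget figures are assumptions.

def product_score(bugs_remaining: int, remaining_budget: int,
                  bug_penalty: int = 50) -> int:
    """Remaining budget counts for you; every bug still in the
    product at launch counts against you."""
    return remaining_budget - bug_penalty * bugs_remaining

# A team that spends heavily on testing and fixing ships fewer bugs
# but keeps less budget; a rushed team keeps budget but ships bugs.
thorough = product_score(bugs_remaining=1, remaining_budget=200)  # 150
rushed = product_score(bugs_remaining=6, remaining_budget=400)    # 100
```

With these (invented) weights the thorough team wins, but shift the bug penalty down and the rushed team comes out ahead, which is exactly the uncertainty the teams had to gamble on without knowing their product's quality in advance.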

Several unscheduled incidents

1. The team consisting of myself, Tom Breur and Ronald Damhof knew one another but had never facilitated this simulation together. A try-out was needed. The combination of full diaries, available space and the availability of the participants made this an impossible mission in the short term. Ronald Damhof had to spend several weeks in the US but luckily Tom Breur’s son Tim was able to come to the rescue.

2. The simulation consisted of an element that required two marbles of different colours. We didn’t have those resources. For us, the days of playing marbles in the playground lay in the dim, distant past and, unfortunately, that also applied to the ranges on offer in toy shops. Online shopping seemed to be the solution but sadly the marbles we ordered never arrived in the Netherlands. Buying in a considerable number of bags of M&M’s resolved the problem.

3. Another major element in the simulation was (Monopoly) money. We had invited quite a lot of people and they ALL turned out to be present! That meant we didn’t have enough money. Who could ever have assumed that everyone who had promised to come would actually turn up? The consequence was that we quite literally had only two minutes to print off extra money.

4. The idea was for Tom and Tim to direct the simulation, and that I would try and learn as much as possible as an observer. A well-thought-through plan. Reality proved less compliant. Tom and Tim were running their legs off and I wasn’t able to be with all three groups at once, which meant I couldn’t properly observe the dynamic of each group. We’ll do that differently next time.

5. The try-out took place in a time box of two hours. The most common complaint immediately after the end was: “We want to play the game again!” Maybe so, but there was no more time. Gerald Weinberg’s statement “quality is value to some person at some time that matters” (Quality Software Management, 1992) ran through our minds. Fortunately, in the days that followed we received lots of positive feedback from the participants. If you want to obtain different (read: better) results, you definitely shouldn’t keep doing the same thing. The lessons learned from the try-out were written down. We made some adjustments and concluded, with satisfaction, that we had become much wiser.

6. December 2012, it was winter in the Netherlands. The cold and snow had the country in their grip. Indoors, the heating was on high in the simulation room where the (three-hour long) morning session had started. The simulation was going well and we had not yet been confronted by the relationship between a high room temperature and chocolate M&M’s. However, once the afternoon session was underway, we soon became familiar with the natural phenomenon of melting. A considerable number of M&M’s got broken, and soon, alongside our red and yellow M&M’s, we were also using orange ones in the random samples. Fortunately, everyone was able to see the funny side.

7. We added an extra game element to expand the complexity of the simulation. In the middle of the room we set up a table with pens, paper and calculators. These aids were literally there for the taking and, to our thinking, were very useful. No one, and I mean not one single person, used them.

Conclusions

Naturally, this is not a complete list of everything that ‘happened’ to us, but at least it gives you a picture. Time to take stock.

Even a relatively simple project like facilitating a workshop involves a degree of complexity that inevitably leads to surprises. You can design projects to succeed or to fail. Don’t let yourself be surprised by “an accumulation of setbacks”.

Test your plan early and frequently, and modify it if necessary. A pilot (try-out) or a Hudson Bay Start (Rothman, 2007) can provide you with fast and inexpensive insight into whether or not you will be successful. You don’t need to work in an ‘agile’ organisation to implement agile elements. It’s simply good sense to apply the agile principle of Fail Fast, Learn Fast.

Your ‘single point of failure’ (for example, M&M’s, an essential colleague) will fail during the project. Get rid of it.

And now it is time to reflect philosophically on the article. The Temple of Apollo at Delphi is said to have borne the inscription “Know Thyself”. Thank you BIPODIUM for giving me the opportunity to remind myself of lessons I have already learned!