This is the first of a few guest posts from Version One.

Staying competitive in today’s economy means companies must deliver the right products to market faster and with higher quality than ever before. Adopting agile development methods is a significant way organizations are doing this. Making the switch to test-first programming and agile testing challenges traditional notions of software engineering best practices.


Myth 1: Everyone on an agile team tests, so dedicated testing resources aren’t needed.

Fact: Teams interested in delivering a quality product realize that the same product knowledge, preparation, discipline and skills used in traditional QA should be leveraged when transitioning to agile.

Myth 2: Testers are second class citizens on an agile team.

Fact: Testers are valuable members of an agile team and often add a unique perspective in terms of identifying potential roadblocks and dependencies early in the process.


Myth 3: Important testing activities that aren’t product-related won’t get done.

Fact: Additional testing activities, such as performance and regression testing, are addressed through test-oriented stories, which are estimated, planned and tracked just like product-related stories.

Learn more about Test-First programming, and how to transition from traditional testing to agile testing.

“Nothing can drag you down if you’re not holding onto it. Let go.” – Anthony Robbins

We all have a tendency to hang on to things; perhaps it is a way of working, or a behaviour, or a judgement of one of our co-workers. In order to move forward as a team, we may well have to let go of strongly held beliefs. What can you do as a Scrum Master to show the way? What can you let go of?

What’s one thing you can support your team in letting go of?

“The Hawthorne Effect. Loosely stated, it says that people perform better when they’re trying something new.”


As scrum masters and project managers, one of our main goals is to support the team in becoming the best team possible. Is the team doing the same old thing, day in, day out? Are you running your retrospectives in the same fashion at the end of every sprint?

This week’s challenge:

What’s one thing you can support your team in doing differently in the next sprint?


I have been reading a discussion on Jordan Bortz’s site about the lack of Scrum success stories becoming a concern. In principle I agree with the title; there are far too few success stories. But that is not a failing of scrum or agile, in my view.

I firmly believe that it’s always about people, never about process. Scrum is a process in that regard, so that leaves people at the crux of the problem Jordan is writing about.

He is, in my view, missing something. For the sake of this conversation, let’s say there are two crude stages:

1. The adoption of scrum

2. The ongoing use of scrum

I would challenge him that he is actually referring to the lack of success of scrum adoption.

I have come across and heard many stories of companies wanting to buy scrum, to do scrum without being agile. It’s like wanting the results of a diet without the work, by taking a magic pill. There are no magic pills.

You can put all the agile/scrum ceremonies (process) in place and not get the outcomes you’re after; your culture (people) needs to change too, and this is the challenging part.


What’s your experience?

There is no getting away from it, nor should we want to: organizations want to be able to measure teams — their output, performance, yield, whatever you want to call it.

What is the goal here? For me, the goal is not to measure agility, or to become agile, but to become predictable. If we become predictable at delivering quality products, it’s a game changer for the organization.

So for me there are five key areas to measure for predictability:


    1. Planned Commitment – is our commitment stable? The goal should not be about driving this up, but about a stable, predictable and sustainable commitment; the law of diminishing returns is at play here if we try to bleed it dry.
    2. Variability
      1. Delivered Commitment – are we doing what we said we were going to do?
      2. Ability to commit – where is the variability coming from in the system that prevents us from hitting our commitments? Sometimes it is internal to the team; a lot of the time it’s external dependencies.
      3. Ability to respond – how good are we at removing sources of variability from the system?
    3. Quality
      1. Defect count – how many defects are we letting through at the sprint and release level, and how many do we miss?
      2. Technical debt – what corners have we cut today that we will pay for later?
    4. Customers
      1. Value Delivered at the Epic level – we should be able to derive a % trend of the value delivered by epic, based on the number of stories that are truly delivered. A word of caution here: value is very hard to predict, and there is a growing part of the market who feel ROI is really ROI = one guess / another guess. If we agree with that, then we must see that it’s a false metric to use. There is a move to cost-of-delay-based tracking, i.e. asking what is the cost of me NOT doing this work, with four risk profiles:
        i.      Expedite – I am incurring a loss right now
        ii.     Deadline – I will incur a cost at a fixed date in the future
        iii.    Normal – I will incur an increasing incremental cost; this may become an expedite over time
        iv.     Slow Burn – I do not know whether I will incur a cost
      2. Value materialized – did the value we predicted actually materialize when we went to market?
    5. Employees
      1. How happy are our staff? Things like staff turnover have to be part of the full picture of the cost of a project. It’s well established that happier staff are more productive. If we deliver one product but it costs us the team’s happiness, or the team itself, is that a good price to pay?
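The commitment and variability measures above can be derived from basic sprint data. As a minimal sketch (the sprint figures and structure here are hypothetical, purely for illustration): delivered commitment is the ratio of points delivered to points committed, and the spread of that ratio across sprints is one way to see how predictable the team actually is.

```python
from statistics import mean, pstdev

# Hypothetical per-sprint data: (points committed, points delivered)
sprints = [(30, 27), (32, 30), (31, 31), (30, 24), (33, 31)]

# Delivered commitment: are we doing what we said we were going to do?
ratios = [delivered / committed for committed, delivered in sprints]

# Predictability is about a stable ratio, not a maximal one:
# a low spread means the team's commitments can be relied upon.
avg = mean(ratios)
spread = pstdev(ratios)

print(f"average delivered commitment: {avg:.0%}")
print(f"variability (std dev): {spread:.1%}")
```

The point of a sketch like this is the trend, not the number: a team delivering around 90% of its commitment every sprint is more predictable, and more useful to the organization, than one swinging between 70% and 110%.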

Along with this, there are some key questions we need to ask ourselves:

  1. Are we as an organization going to respond to the data and take the required steps to protect the status quo (if need be) or make the required changes?
  2. Are we as an organization set up to be able to answer these questions?
  3. Do we value the exercise of planning that is required to enable our teams to provide these stats?
  4. Do we as an organization promise not to use this data to measure our people? Measurement should be a diagnostic tool; otherwise you risk people gaming the data, making it useless for its intended purpose, and we end up shooting ourselves in the foot.


I recently attended a Limited WIP Society meetup at Skills Matter, where Karl Scotland and John Stevenson ran the Ball Flow game as a way to introduce Kanban to people. The game originated as a way to introduce Scrum, and I had the opportunity to run such a session this week. However, I decided to tweak it a bit to enable various scenarios to be experienced by the participants and to support specific conversations.

Here is the setup:

Last Thursday I was speaking at the Agile Evangelist meetup at Skills Matter. It is a new group of people focused on all things agile in London. I ran a version of the “35 Game” (see an earlier post for more details on that).

On the night, the question that we asked was: what are the key factors for companies to consider when adopting agile? There were lots of challenges created by writing the question this way, some intentional, some not.

In game learnings:

One in particular keeps recurring: people are afraid to ask questions. When encouraged to proceed, rather than saying “hold on, you’re asking me to do something that I don’t understand,” people keep their heads down and get on with it. It takes a few rounds of the game before the confusion reaches a breaking point for someone and they “shout” to get clarity.

At the start of the game, people were asked to write one idea per card; however, with the plural word “factors”, people wrote many. The takeaway here is that people pay more attention to what they read than to what they hear.

When asked to “point up” the cards (i.e. distribute 7 points across both cards), some people wrote 7 bullet points for the idea, rather than awarding the cards a score out of the 7 points available. The takeaway here is that the same words don’t mean the same thing to different people.

Game Outputs:
34 points – Improve the existing workflow and measure progress
23 points – TDD and improved quality
21 points – Real customer feedback
19 points – Reliability (unit testing, user involvement)
19 points – Support work starting with imperfect information
18 points – Flexibility around changes
18 points – Reduce the distance to the product owner
17 points – Get small: easier to manage, smaller requirements, smaller teams
14 points – Greater visibility, with reduced upfront admin
14 points – Decision making, understanding the process
12 points – Move to a flow system
12 points – Co-location
12 points – Early visible value

So what were the key learnings?
There really is no wrong way to run this game; so many learning points come up that the output on the cards is almost secondary to the in-game learning that takes place.

The same talk will take place on the 31st of March; why not come and join in?

I came across the 35 technique for gathering input from a team on Lyssa Adkins’ blog and had been wanting to try it for a while. I ran it yesterday, with some changes, and it worked brilliantly.


It is used to quickly gather input or feedback from a team; I found it especially useful for teams with communication issues.


Flip chart (with a pre-written agenda and timing), markers (ideally the same for each person, same colour), cards, stopwatch.
