TestBash 2015 – More than Just a Conference

I have to admit, I was really excited about attending TestBash in Brighton. It was my first conference in 18 months and there was a real buzz about this one on social media. The schedule looked really interesting and there were five of us attending from work, so it was kind of like a team outing.

The journey down to Brighton on the Thursday didn’t go without a hitch; the train from Victoria to Brighton got caught behind a broken-down train, which meant we got in 30 minutes later than planned. By the time we had checked into the hotel and found somewhere to eat, it was too late to attend the Pre-Conference Social. Personally, I was gutted about this, as I had been speaking to several other testers on Twitter in the weeks beforehand and was looking forward to meeting them. The rest of the team seemed quite glad to be going back to the hotel to get some sleep, and I can’t really blame them for that.

The following morning, Jesus from our team and I got up at 6am and joined the Pre-Conference Run. We completed the 5km run along the promenade and were back at the hotel having breakfast by 7.30.

We arrived at the Brighton Dome, and from the moment we walked in, TestBash had a different feel about it to other conferences I had attended. Whether it was the ninja stickers we wore with our names on rather than the formal name badges of other events, or the Ministry of Testing t-shirts, there was a real feel of community.

Here is the Intel Security team who attended the conference

There were lots of great talks during the day with lots of interesting concepts:

  • It was interesting to discover the differences between the testing and release processes for iOS and Android apps.
  • I was fascinated by Martin Hynie’s story of how renaming the Test team to Tech Research, then to Business Analysts, then back to Test caused the company to treat the same group of individuals differently, really showing the power of job titles.
  • Vernon Richards gave an amusing look at some of the phrases that are thrown around about testing, such as “Anyone can test”, or the questioning of why testing didn’t find the one bug that caused problems in production. He also gave an example of how to deal with a product manager who wants a number for how long testing will take and doesn’t get the answer he wants.
  • Maaret Pyhajarvi’s session really showed that quality isn’t the responsibility of just the testers; in fact, Maaret went as far as to say that quality is built by the developers, and the testers just inform on it. This came from her account of working as the solitary tester on a team of developers and seeing that initially quality went down with the addition of a tester, as the developers became less vigilant with their own testing before handing work over, expecting Maaret to pick it all up. She showed us how she managed to get them on board and, as a team, improve the quality.
  • Iain McCowatt discussed how some people have the intuition and tacit knowledge to see bugs, whereas others have to work methodically to find them. He then went into ways to harness this diversity within a test team.
  • The concept that stuck with me from Matthew Heusser’s talk on getting rid of release testing was that changing the process shouldn’t be done all at once; the best way is to try one or two new stages first and make gradual changes. (I also liked the fact that he worked on his slides on his tablet as he presented.)
  • Karen Johnson gave a very thought-provoking talk on how to ask questions. This really resonated with me, and I can certainly see ways to get more out of people when I’m asking questions.

There were 3 talks which really stood out for me.

The Rapid Software Testing Guide to What You Meant To Say – Michael Bolton

I had interacted with Michael a few years previously when he gave me some constructive criticism on one of my earlier blog posts on this site, so I was intrigued to see what he was presenting. This was a very interesting talk; Michael is a very engaging speaker and it’s clear why he is one of the most respected members of the testing community.

The concept of this session was to remind us of some of the phrases commonly used by testers which can cause misunderstanding or misconception. He showed some examples where he swapped the word “testing” for “all of development” in phrases such as “Why is testing taking so long?” and “Can’t we just automate testing?”, suggesting that people may use testing as a scapegoat when, in fact, the whole process should take the blame.

Michael went on to talk about how safety language should be used: phrases such as “…yet” and “so far”, and not making statements such as “It works”, instead stating “So far, the areas which I have tested appear to meet some requirements”, or something like that….

The discussion of testing vs checking came up (which was part of the issue Michael had with my earlier blog post…. I’ve since done the necessary reading to know the difference), showing how checking fits into the testing process.

Overall, I learnt that it can be very easy to make statements which raise expectations higher than they should be, or give the wrong message completely. I will certainly be ensuring I use safety language more often in the future.

I also feel it would be really useful to go on the Rapid Software Testing course. Something to look into this year.

Why I Lost My Job as a Test Manager and What I Learnt as A Result – Stephen Janaway

I hadn’t heard Stephen present before and he came across very well. His talk covered how, when he was working as a Test Manager in an agile process, he was managing individuals across several different teams while each team had its own development manager. He talked of the difficulties in decision making and how the products were slow to be developed and released.

Stephen then described what happened next: Test Managers and Development Managers were removed from their roles, a Delivery Manager was put in each team, and the process improved. The question then was what happened to the Test Manager? Stephen explained the roles he is now involved in, such as coaching management on testing and on how to manage testers, setting up internal testing communities so that the testers still have like-minded people to discuss testing issues with now that they haven’t got a test manager, and generally being an advocate for testing and quality within the organisation.

It showed that Test Managers need to be adaptable and prepared to go down a slightly different path. This seems to be the way the testing industry is going, so it was interesting and reassuring to see that there are other options out there.

The other point that hit me during this presentation was that of internal testing communities. We have lots of individual test teams working on different projects, all developing their own automation frameworks and using different tools; it would be good to bring everyone together to share ideas, and maybe get some external speakers from the testing community in to inspire them.

I really enjoyed Stephen’s talk and it gave me plenty of food for thought about the future.

Automation in Testing – Richard Bradshaw

Richard’s talk resonated with me for one reason: he explained how in his early years he had been seen as the automation guy and would try to automate everything, until he realised that too much had been automated and a benefit was no longer being seen. I have seen for myself how teams can be so focused on having all of their testing automated that they actually spend more time fixing failing tests when the next build completes than they do writing any other tests. I whole-heartedly agreed with Richard when he stated that automation should be used to assist manual testing (writing scripts for certain actions to speed up the process) rather than being relied on for everything.

This does seem to be a hot topic for discussion, as there is the question of automated regression tests/checks and automated non-functional testing: how should these be approached? This presentation definitely gave me a lot to think about on how to improve our use of automation when I get back to the office.

Richard presented this really well and I would say it was my favourite talk of the day.

I have to say that the schedule from start to finish was enjoyable, the lunch was delicious, and the organisation of the day was fantastic. I honestly can’t wait to go back next year.

I really felt that the testing community is a great place to be: so many great people and great minds with interesting ideas, and a great chance to improve yourself by attending conferences like this.

I am really glad that I discovered http://softwaretestingclub.com and was able to find details of the conference on there. The next step for me is to help set up the internal testing community and get them to look at it too. Maybe we can have a bigger Intel Security contingent next year; maybe I will find something to present. 🙂

Overcoming the Resistance – QA Involvement in Peer Reviews

Maybe I was quite naive about peer reviews, but my previous experience was that it was a natural part of the process to have QA/testers involved in code reviews alongside developers. Whenever a new feature or a bug fix was implemented, before the code was checked in, the developer would set up a walkthrough with another developer and a member of the QA team. The team I was part of was quite a mature engineering team with a defined coding standard which was ingrained into the team; it was never even considered that development would commit code without showing it to QA and talking it through with them. This would provoke discussions around how to test the features and whether the developer-written unit tests had enough code coverage for QA’s satisfaction. Moving to a different team, I have seen that this isn’t necessarily part of the process: peer reviews happen between developers, and it has never been considered to involve QA.

I have read a lot about this subject and, actually, the level of maturity around code reviews in that first team is relatively rare in the industry. So why is it so rare? I guess it depends on perspective and the level of testing being done:

  • If ‘black box’ testing is the main form of testing, then I guess not knowing about the code is the ‘right’ thing to do?
  • If ‘white box’ testing is used, then knowing the functionality you are testing is paramount; other than functional specifications, the best way of seeing how the area under test works is to review the code.

All testing I have ever done has been a mixture of both of these: there has been testing which I could pull straight out of the user guide or functional specification, and other testing where I have needed to know intricate details of the code to ensure I am covering all feasible code paths.

I have always found it beneficial to be involved in peer reviews, even if I don’t say anything during the review but just soak in how the code works and write down ideas for testing. Usually, though, I will ask questions such as “what if I entered this here?” or “what if I did this?”, using the meeting to force the developer to think about their implementation rather than just plodding through the code line by line.

So why is it so alien to some teams to involve QA? Here are some of my thoughts on some common phrases used:

  • QA don’t have the skills to review code – Not every QA resource will know the syntax of the particular language, but does that mean that, by sitting with developers and understanding how the code works, they won’t find issues or raise questions which provoke the developer to improve their code?
  • Having QA involved will delay the build/release – Only if you treat QA as a separate entity to development and run separate peer reviews. If they are involved in the same peer review, it shouldn’t make much difference. We are here to prove the quality of the work, not to be a hindrance.
  • It’s only a one-line change, why would QA need to review that? – On one hand, yes, it is only one line, but the context of that one line could have an impact on existing testing, and just having QA aware of the change could be useful.

I’m not for a second suggesting that development teams are dead against the idea, but I think as we move to an increasingly ‘agile’ world, separating Development and QA at this level needs to change; we should be promoting a ‘One Team’ approach where value is provided by everyone involved. QA can bring value to code reviews. Quality should be built in at the start, and the earlier this can be proved, the better things are going forward. It needs to be clear that a critique of the code is not a critique of the developer.

Some quick wins may be needed to win the development team over:

  • Read up on the functionality before you attend the review, so you have a basic understanding of what was meant to be developed
  • Ask logical questions
  • Discuss options for testing
  • Flip it and have development review tests (share both activities)

Baby steps are needed for progress. Find a way to get involved in some small tasks to start with, and build up trust with the developers that you are not just there to be a pain in the backside; if you work side by side with the development team, it will improve the product delivered.

Any thoughts on this topic would be most appreciated.

Lessons Learned from BCS SIGiST Summer 2013 Conference

Thought I could kill two ‘proverbial’ birds with one ‘proverbial’ stone here and document the latest conference I attended, while also writing my first post on here in two years!

It was an early start: those of us going met at the rail station and got the train into London Marylebone. A short walk to the venue and we were there, ready to start. A quick cup of tea and a biccy, then into the lecture theatre for the opening keynote:

The Divine Comedy of Software Engineering by Farid Tejani (Ignitr Consulting)

Farid confidently presented a talk outlining the changes that many industries have faced over the past decade, industries which have been ‘disrupted’ by the digital world, stating that “Instability is the new Stability”. Blockbuster Video and AOL ‘Free Internet’ CDs were used as examples of how industries have been disrupted. He then said that Software Engineering could be the next industry to be ‘disrupted’ by the digital revolution. If I understood Farid correctly, his point was that unless we adapt and change with our surroundings, companies will be left behind as their function becomes obsolete (much like Blockbuster Video).

He then turned to Software Testing and discussed how we were going wrong as an industry, suggesting some common misconceptions about testing which need to be corrected such as the following:

  • Testing is a risk mitigation exercise
  • Testing is an efficient way of finding defects
  • Testing adds quality to software

He listed 9 of these in total and explained why he felt they were myths. Farid then covered the ever-contentious issue of agile testing versus waterfall/V-model, and how he thought all companies need to move to a more ‘agile’ model to avoid being disrupted in the future. Personally, I can see what Farid is suggesting here; however, there will always be companies and industries which continue to use non-agile methodologies purely because they have to, or are mandated to (such as the aviation industry). That doesn’t mean they can’t evolve the way they work within their confines, though: there needs to be a better feedback loop at each stage of a project/product/programme.

Overall, it was an interesting talk, and the obvious message that came out of it was to ensure that, as testing professionals, we keep ahead of the curve, so to speak, and stay aware of changes that may affect our ‘Multi-Billion $ Industry’.

Then time for a tea break and another biscuit or two (maybe there were muffins at this point too :-))

Expanding our Testing Horizons by Mark Micallef (University of Malta)

Mark Micallef has experience in the industry, having managed the BBC News testing team amongst other roles, and has now moved to academia to lecture on and research software testing. He brought his knowledge to SIGiST to talk about some new ideas and techniques.

Mark started the talk by saying that, having attended many conferences, he has found the industry seems to be continuously talking about the same topics – agile, automation and test management amongst others. He felt that as industry professionals we should be striving to identify new and better ways to test, and to find ‘The Next Big Thing’ – as web automation was back in 2007. It was suggested that this could happen through the industry working more closely with academics. There may be obvious differences in the way the two worlds work, but there would be benefits for both sides. Academics tend to be all about understanding: they need to know exactly what they intend to do, and they look for perfect, complete and clean solutions. Industry professionals, by contrast, tend to be pragmatic and develop a solution that works and ‘adds value’ rather than one that is perfect and complete. This obviously doesn’t mean that one is wrong; they just have different ways of working.

He then suggested 3 techniques which could be used to improve testing.

The first insight was “Catching Problems Early, Makes them Easier to Fix”

This related to static analysis tools, but took things one stage further, suggesting that most static analysis tools give too many alerts, which may lead to cognitive overload and eventually to the tool or technique being abandoned. This brought us to the suggested concept of the ‘Actionable Alert Identification Technique (AAIT)’: the idea that you apply criteria to the output of alerts – areas of code you wish to focus on, a certain junior developer’s code you wish to look at, or just code submitted in the last week – and, by applying these criteria, you are given the alerts ranked in prioritised order. This obviously depends on some form of automation being set up to perform the analysis, but the outcome would be worthwhile. A rough sketch of the idea is below.
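
To make the idea concrete, here is a minimal sketch of how such a ranking might work. This is my own toy illustration rather than the AAIT itself, and the alert fields and weights are invented:

```python
# A toy alert-ranking sketch: score each static-analysis alert so that
# severe, recent alerts in a chosen focus area float to the top.
from dataclasses import dataclass

@dataclass
class Alert:
    file: str
    rule: str
    severity: int        # 1 (low) to 5 (high), as reported by the tool
    days_old: int        # how long ago the offending code was committed
    in_focus_area: bool  # e.g. code owned by a team we want to review

def priority(alert: Alert) -> float:
    """Higher score = more actionable; the weights are arbitrary."""
    score = alert.severity * 2.0
    if alert.days_old <= 7:       # changed in the last week
        score += 3.0
    if alert.in_focus_area:
        score += 2.0
    return score

alerts = [
    Alert("billing/rates.py", "unused-variable", 1, 400, False),
    Alert("auth/login.py", "sql-injection", 5, 2, True),
    Alert("ui/theme.py", "long-line", 1, 3, False),
]

# Highest-priority alerts first; the long tail can be ignored without
# the cognitive overload Mark warned about.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):5.1f}  {a.rule:16} {a.file}")
```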

The second insight was “We Should be Aware of the Effectiveness of Our Test Suites”

Mark started off suggesting that code coverage tools should be used to work out test coverage, but showed with simple examples that you can achieve 100% line, decision and function coverage without actually testing anything useful. The example Mark showed had a function meant to multiply two numbers together, with an assert that, if you passed in 5 and 1, the answer would be 5. Simple enough, but if you change the multiply sign (*) for divide (/), the answer when you pass these two numbers in (in the same order) is still 5, effectively rendering the test useless! The suggested concept here was ‘Mutation Testing’. Mutation testing is where you create multiple ‘mutants’ of your product by modifying the code and then run these ‘mutants’ through your tests; it is a bad thing if all your tests pass as if nothing has changed. Ideally, even a small change in the code should cause at least one test to fail. Once tests have failed, you can identify the changes needed to improve the test cases.
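
Mark’s multiply example is easy to reproduce. Here is my own reconstruction of it (not Mark’s actual code), showing a ‘mutant’ surviving the weak test and being killed by a stronger one:

```python
# The test below passes for the original function AND for the "mutant" where
# * is replaced by /, which is exactly what mutation testing is meant to expose.
def multiply(a, b):
    return a * b

def multiply_mutant(a, b):   # the mutant: * swapped for /
    return a / b

def weak_test(fn):
    return fn(5, 1) == 5     # 5 * 1 == 5, but 5 / 1 == 5 too: mutant survives

def stronger_test(fn):
    return fn(5, 2) == 10    # 5 * 2 == 10, but 5 / 2 == 2.5: mutant is killed

for name, test in [("weak", weak_test), ("stronger", stronger_test)]:
    survived = test(multiply_mutant)
    print(f"{name} test: mutant {'SURVIVED' if survived else 'killed'}")
    assert test(multiply)    # both tests pass on the original code
```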

This did highlight some problems with mutation testing, such as the expense of generating mutants and executing the tests. Despite the problems, it sounded like a very interesting technique and is certainly something I intend to investigate further.

The third and final insight was “Testing will Never Provide Guarantees that Bugs do not Exist”

Mark talked about how testing is currently seen as a way of following every possible path in the code to improve coverage. Mark suggested that an additional concept, ‘Runtime Testing’, could help. This is where the test is exactly what the user may be doing right now, so it would work well for web pages or apps/programs where there is a lot of user interaction. It requires mathematically working out the exact paths users may take when using the software or website, which will not come naturally to everyone. There may also be performance overheads if you are trying to test and measure what the user is currently doing.

Mark then went on to recommend how working with research and academia can have huge benefits, and that we should look into some of the whitepapers available on different topics for ideas.

This was a great presentation which gave some genuinely new ideas, and new ways of finding ideas through academic research.

Requirements Testing: Turning Compliance into Commercial Advantage by Mike Bartley (Test and Verification Solutions)

Following straight on from Mark’s presentation was this one on requirements testing. Mike started off with the example of buying a laptop: you have a set of things you want when you go to purchase a new one, but when you buy it, can you map the features of the laptop back to the requirements you originally had? It was an interesting point; people often have a preconceived idea of what they want and don’t always get exactly that.

Mike then went on to software requirements and explained that they are a lot more complex. He said that poor and changing requirements have been the main cause of project failure for years, which highlights the need to capture requirements and store them. Mike then asked, “How do we make sure requirements are implemented and tested?” The obvious answer is to ensure they are tracked and measured as the project goes on. Mike took this one step further and talked about mapping requirements to features, features to implementation, and finally to tests. Setting up a uni-directional flow of these mappings would highlight ‘Test Orphans’ (tests which don’t map to any requirements, features or implementations) and ‘Test Holes’ (gaps where no tests map to requirements, features or implementations). This highlighted the importance of knowing the exact purpose of every test; I’m sure most projects which have gone through several releases have orphaned tests that are kept purely for sentimental reasons. These orphans waste time and effort, and of course the test holes highlight a risk, as a requirement will be missing a test.

Mike then went on to talk about requirements management and how there is a reasonable number of tools which provide the ability to map requirements to features, designs, units and even code, but not necessarily to tests – or at least not with the ability to show test status or results.

The presentation then went down the track of showing how SQL can be used to create a bi-directional requirements mapping, meaning we can relate test status back to the requirements themselves. This sounds like a useful idea which could bring a good return on investment, although there would be an initial cost to build the database and to add all the requirements and test information into it. But the potential business advantage is massive: all holes and orphans could be identified quickly, along with analysis of risk and impact.
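
To get my head around it, I sketched out how the orphan and hole queries might look. The schema and names below are my own invention (Mike’s real model will differ), using SQLite from Python just to keep it self-contained:

```python
# A toy requirements-to-tests mapping: find 'Test Holes' (requirements with
# no test) and 'Test Orphans' (tests mapped to no requirement) with two joins.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE requirements (req_id TEXT PRIMARY KEY, title TEXT);
CREATE TABLE tests        (test_id TEXT PRIMARY KEY, title TEXT, status TEXT);
CREATE TABLE req_test_map (req_id TEXT, test_id TEXT);

INSERT INTO requirements VALUES ('REQ-1', 'User can log in'),
                                ('REQ-2', 'User can reset password');
INSERT INTO tests VALUES ('T-1', 'login happy path', 'pass'),
                         ('T-9', 'legacy export check', 'pass');
INSERT INTO req_test_map VALUES ('REQ-1', 'T-1');
""")

# Test holes: requirements with no mapped test at all.
holes = db.execute("""
    SELECT r.req_id, r.title FROM requirements r
    LEFT JOIN req_test_map m ON m.req_id = r.req_id
    WHERE m.test_id IS NULL""").fetchall()

# Test orphans: tests that map back to no requirement.
orphans = db.execute("""
    SELECT t.test_id, t.title FROM tests t
    LEFT JOIN req_test_map m ON m.test_id = t.test_id
    WHERE m.req_id IS NULL""").fetchall()

print("Test holes:", holes)      # [('REQ-2', 'User can reset password')]
print("Test orphans:", orphans)  # [('T-9', 'legacy export check')]
```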

The potential of this idea is huge and again is something that will require some additional investigation.

It was then time for lunch and have to say the food was delicious. Moroccan Lamb with wedges and salad, followed by Eton Mess! We did a little bit of networking and then decided for half an hour, we would go and wander around Regents Park and get some fresh air.

Improving Quality Through Building a More Effective Scrum Team by Pete George (Pelican Associates)

With my interest and knowledge in the Scrum area, this talk was probably the one I was most looking forward to before the conference. Pete George clearly knows his stuff and was able to keep everyone interested throughout. He started off by talking about the “Marshmallow Challenge” (http://marshmallowchallenge.com/Instructions.html) and how it is a useful challenge for teams to try. He then showed a graph of how different industries had fared in the challenge. Engineers and architects came out on top, as you would expect, building the highest structures, but the next highest were kindergarten children. This said more about the techniques used by the other teams, which would generally spend 90% of the time building a structure out of spaghetti, put the marshmallow on top with about a minute to go, watch it fall, then quickly throw something together in the last 30 seconds that holds the marshmallow. The kindergarten kids used a completely different strategy: build a stable structure with the marshmallow on it from the start, then continue to enhance it (effectively doing agile without realising it!). This was an interesting exercise which may be worth trying with our team.

Pete then gave a brief, succinct overview of the Scrum process using the 4 Artifacts – 3 Roles – 4 Ceremonies description. He talked about the fact that, with the continuous inspect-and-adapt approach, there is quality control built into the Scrum framework. But the point Pete addressed next was the crucial one: the team is only as good as the people in it. Companies spend all the money they have on changing the process the teams work in, but if the teams aren’t working well together, projects will still fail. The problems come down to the fact that some people are set in their ways and refuse to change; these may be crucial people technically, but when it comes to teams, they don’t seem to fit. This is where it comes down to things such as team building (socialising as a team), but also to the Belbin Team Roles theory: the idea that any good team will cover off all 9 roles shown below:

Belbin Team Roles

There are some interesting roles here; in my head, I’m already trying to identify who fills which role within my own team.

It was a useful talk. Pete obviously had to pitch it at people who didn’t know a lot about agile/Scrum as well as those who did, and I thought he did a great job of covering both camps.

Mission Impossible: Effective Performance Evaluation as Part of a CI Approach by Mark Smith (Channel 4) and Andy Still (Intechnica)

This talk covered how Channel 4 took an approach to include performance testing within their Continuous Integration setup to get constant feedback on performance. Andy Still from Intechnica started off the presentation by talking about the CI approach. He mentioned that the first thing to do when considering performance testing for Continuous Integration is to ensure that performance is treated as a first-class citizen: it should be considered as important as any functional requirement, and performance requirements should be defined and documented alongside the functional requirements at the start of the project. Andy then said it was important that the tests have realistic pass/fail criteria relevant to the stage of the project, and that they should be able to run without any human interaction or interpretation.

The next point was an interesting one: Andy mentioned that performance is a linear problem, not a binary one, meaning that the check-in that broke the build may not be the one which caused the failure; the last check-in may have ‘tipped the scale’, but a previous one may have pushed the scales from nowhere near the limit to close to it. Andy then started a small debate by asking whether the real challenges to successfully implementing performance tests within CI are down to process or tools. He presented both sides of the argument well and left us to decide. Personally, I was split, as I can see that both process and tools present issues that could prove to be large challenges for teams trying to set this up.

Andy then handed over to Mark, who talked more about Channel 4’s particular needs for CI performance testing. Mark mentioned that performance issues can sometimes be more challenging to fix than functional issues, and that build failures and short feedback loops can stop these breakages making it into the codebase. He then discussed whether we would want pure CI performance testing, stating that it may be disruptive at the start of a project, but on stable projects it would provide early feedback on performance issues. He then talked about Channel 4’s particular choice of tools: Jenkins for build management, JMeter for load testing and webPageTest for front-end instrumentation. Mark then went through how their particular system works.

Finally, Mark gave a brief description of what CI performance testing gives us: very short feedback loops, the ability to fail builds based on multiple metrics, and performance trending data.
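
As a thought experiment, the gate itself could be as simple as a script the build server runs after the load test. This is purely my own sketch, not Channel 4’s setup: the CSV format and the elapsed_ms column are invented (a real JMeter results file has more to it), but the principle of an unattended pass/fail is the same.

```python
# A minimal unattended performance gate: parse a load-test results file and
# exit non-zero so the CI server (e.g. Jenkins) fails the build.
import csv, statistics, sys

THRESHOLD_MS = 800          # stage-appropriate budget for mean response time

def mean_response_ms(path: str) -> float:
    with open(path, newline="") as f:
        times = [float(row["elapsed_ms"]) for row in csv.DictReader(f)]
    return statistics.mean(times)

if __name__ == "__main__":
    mean = mean_response_ms(sys.argv[1])
    print(f"mean response time: {mean:.0f}ms (budget {THRESHOLD_MS}ms)")
    # A non-zero exit code is all the build server needs to fail the build.
    sys.exit(0 if mean <= THRESHOLD_MS else 1)
```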

Another very interesting talk and possibly my favourite presentation of the day, very well presented and it was a topic which could be of use.

Time for the final tea-break before the closing keynote.

Keep Calm and Use TMMi by Clive Bates (Experimentus)

Clive started by introducing who Experimentus are and what they do: an IT services company who help optimise their clients’ approach to software quality management. He stated that testing needs to improve because software fails, and we need to be efficient at stopping these failures. He talked about doing things the right way and removing the barriers to quality work; the place to start is to recognise there is a desire to do better, and to gather evidence of the problem. Clive then mentioned the 7-point methodology used to help companies look at their process and make improvements, called IMPROVE (Initiate, Measure, Prioritise & Plan, Define/Re-Define, Operate, Validate, Evolve). Clive then went on to talk about Test Maturity Model Integration (TMMi), a staged assessment model like CMMi but focused on the testing process. There are 5 levels, from Level 1 (Initial) to Level 5 (Optimising). From what Clive said, I understood that most companies would be working towards Levels 2/3, as these are the levels that require the most effort; Levels 4 and 5 then offer the icing on the cake, so to speak (there is more to it than that, but you get the idea).

Clive talked about why a client would use the TMMi model, stating that it is the de facto international standard for measuring test maturity and is focused on moving organisations from defect detection to defect prevention.

He then talked about the features of an assessment and the types of companies who have gone through it so far. It was an interesting talk and something different from the rest of the day. I didn’t feel there was anything I could personally take back from this, other than to ask the powers-that-be whether they were aware of it.

That was the final talk of a full and enjoyable day of presentations. I heard good things about the two workshops, but unfortunately there weren’t any places left when I looked at going to them.

So things I will take away:

– Mutation Testing looks like a useful technique to try
– Actionable Alert Identification Technique is something which could help us cut the noise from Static Analysis
– Using SQL to map requirements to features/tests looks useful
– Belbin Team Roles Theory could be used to associate team members with roles
– Continuous Integration with Performance testing is a must!
– I would like to attend the Certified Agile Tester training. 🙂

Testing Effectively in an Agile Project

This post will give an overview of how to make a testing team work effectively alongside a development team during an agile sprint.

The first step to ensuring the team works collaboratively is to make sure that all functionality to be developed within the sprint is well explained in frequent discussions, and that everyone understands how each area will work. While some form of functional documentation for each feature is being distributed and reviewed by all parties, discussions should be happening with all team members about how the feature could be tested. They could hold some brainstorming sessions to pull together their thoughts; having the development team involved in these discussions will not be a hindrance. Asking the development team about any uncertainties that arise in these discussions is imperative: there may be areas they haven’t thought of either, and there may be areas deliberately left open to allow the development team a bit of creative freedom to develop as they feel best.

Questions may be asked of the development team all through the sprint cycle, and therefore they should make themselves available as often as feasibly possible (obviously, they have to be left to develop the features as well).

Where possible, it is ideal to automate as much testing as the team can manage. This will possibly cause a bit of pain to set up in the first place, but once the system is in place, future cycles will simply be a case of running the scripts that have been configured rather than manually running every test. Writing an automated system may need to be planned out like a functional feature of the software. There are a few key points to take into account when automating:

Identify High Value Tests – It is important to identify the test cases which will provide the largest return on the time invested. 100% automation coverage shouldn’t be the aim: it isn’t practical or cost-effective, and would be almost impossible to achieve!

Automate What is Stable – Communicate with developers to make sure automated tests aren’t written for areas of code which are still in a volatile state; tests for stable areas will aid the team’s effectiveness and avoid re-work.

Automated Tests Can be Run at Any Time – A large benefit of automated tests is that they can be scheduled for whenever they are needed.

Automation Helps Improve Software Quality – Automated tests generally run faster than a human tester, but the biggest benefit of automation is usually seen in the next release of the product. To see the benefit earlier, automate the long, repetitive tests first to free up testers for other tasks.

Another method to aid productivity is Test Driven Development (TDD). I won’t go into the full details here, but suffice to say it is a practice where the developers write just enough code to pass a failing test. There is a series of simple steps to this, with a small illustration after the list:

- Developer works with the tester to write a test
- Developer writes code which makes the test pass
- Developer refactors the code to evolve it into a better design
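
Here is a minimal red-green illustration of that cycle. It’s my own toy example using Python’s unittest, not from any particular project:

```python
# The test is agreed with the tester first and fails until the code exists;
# the function is then written with just enough logic to make it pass.
import unittest

def total_price(prices, discount=0.0):
    """Written *after* the test below, with just enough logic to pass it."""
    return sum(prices) * (1 - discount)

class TotalPriceTest(unittest.TestCase):
    def test_discount_applied_to_sum(self):
        # Agreed with the tester before any implementation was written.
        self.assertAlmostEqual(total_price([10, 20], discount=0.1), 27.0)

if __name__ == "__main__":
    unittest.main()
```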

This process can benefit the testers for the following reasons:

Documentation and Working Examples of Code – by writing unit tests, the developers are providing the QA team with working samples of code, meaning QA can gain a stronger understanding of how the system works, which improves the quality of any functional tests then written by the testers.

Improves Code Quality – by having the unit tests written first, the quality of the code will be stronger, as it is written to pass the tests. This makes the tester’s job easier later in the cycle.

Team Works Together and Collaborates – there will be believers and non-believers in agile on all teams. To make this process work, it may be a good idea to pair up a believer and a non-believer to help spread the practice. If and when a team is fully on board with TDD, the benefits to the product will soon become apparent.

If the team can work effectively and the testers are able to set up an agile and automated environment, it will eventually free the testers up to try other methods of finding defects, such as exploratory testing!

Lessons Learned from Agile Cambridge 2010 Day 2

Following another 5.45am start, I managed to arrive at the venue for around 8.30am for the second day of the conference. I spent the first half hour or so before the second keynote speech talking to some of the guys from the Cambridge Crystallographic Data Centre, who were attending the conference in numbers.

Building Trust in Agile Teams – Rachel Davies

Having attended the workshop that Rachel ran the previous day, I was looking forward to this session. It was quite an open session with lots of interaction with the audience. Rachel talked about how trust is the foundation of teamwork, and as a good agile process depends on teamwork, it is obviously an important factor. A statement that sticks in my head from this session was that a lack of trust is like a tax on team interaction: it slows the team down and pushes costs up.

Rachel borrowed £20 from an audience member and also got a team to agree to do the ‘trust fall’ to show how trust has to be earned by everyone involved for progress to happen; with the trust fall in particular, the team had to trust each other, and the ‘fallee’ had to trust that the team would catch them. Rachel then discussed different techniques for gaining a team’s trust, both from a Scrum Master type role and as a team member – suggestions such as getting to know the rest of the team, and supporting the team by creating transparency for them. The key lesson to take from this session was that building trust takes time, and it takes a lot longer to regain it once it is lost.

I have to admit, I always thought that trust within teams was almost a given, so this session definitely opened my eyes to the fact that not everyone is necessarily as trusting as I am. I certainly feel that Rachel’s book ‘Agile Coaching’ will be worth buying at some point in the near future.

User Story Mapping & Dimensional Planning – Willem van den Ende & Marc Evers

After the half hour break and another cup of tea, I attended this workshop, which was so well attended that when I turned up at the start time the room was already full. The session kicked off with the guys describing a better way to break down high-level requirements into manageable user stories. This was done by first breaking the system down into its different users – in their example of an online auction site, the seller, the buyer and the site admin. Under each of these users goes a goal for the system, and under those are organised the activities required to meet each goal. Once these have been established, individual tasks making up the features can be created, and sub-tasks (tools) defined from there, as shown in the diagram below (courtesy of Jeff Patton and Karl Scotland).
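
To remember the shape of it, here is a rough sketch of the story map structure from the session, using the auction-site example. The data is purely illustrative, not what the workshop produced:

```python
# Users at the top, each with a goal, then activities, then tasks beneath:
# walking the map top-down gives the backlog in user-goal order.
story_map = {
    "seller": {
        "goal": "sell an item for the best price",
        "activities": {
            "list an item": ["write description", "upload photos", "set reserve"],
            "manage auction": ["answer questions", "track bids"],
        },
    },
    "buyer": {
        "goal": "win items at a fair price",
        "activities": {
            "find items": ["search", "browse categories"],
            "bid": ["place bid", "set maximum bid"],
        },
    },
}

for user, details in story_map.items():
    print(f"{user}: {details['goal']}")
    for activity, tasks in details["activities"].items():
        print(f"  {activity}: {', '.join(tasks)}")
```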

In our team, we discussed setting up a similar mapping for developing a smart energy monitor for the home, which in itself is an interesting product, but was perhaps a bit too complex for this particular exercise, as we kept coming up with features we felt were important. We did, however, narrow our options down to just the home user and the electricity supplier, and we were then able to put a story board together.

Willem and Marc then discussed the next stage, dimensional planning. This is also shown in the diagram above and relates to selecting a number of releases from the tasks and sub-tasks defined in the story board. They can be organised so that the minimum release is defined first, and from that, subsequent releases can be planned – all before any estimates have been considered. The minimum release can then be planned and a timeline defined.

The guys also described a very interesting set of analogies for the releases: ‘dirt-track road’, ‘cobbled road’ and ‘asphalt road’, describing the different quality levels of a release. Dirt-track road is a minimum-quality release that will get the user from A to B. Cobbled road is the quality level most users would be happy with: it does all that is needed, but without any bells and whistles. The asphalt road is the ultimate version, a highly polished release.

All in all, this was a very insightful session where I felt I learned a lot about how to fathom out high-level requirements in a very simple and straightforward way.

The Challenges of Measuring ‘Agility’ – Simon Cromarty and Simon Tutin (GE)

I had sat with these guys at the pub the night before, so I was intrigued as to what they would be discussing in this session; they had both been tight-lipped the previous night about what exactly they would be talking about. They kicked off by showing the journey GE had taken as an agile company to get to the level they are at now. It showed that reaching a relatively stable agile capability does take several years.

Simon C then split us into groups and asked us to discuss what we thought ‘Agile’ meant. This was actually quite a thought-provoking question, as it was amazing how many differing opinions there were within the groups. Obviously there were all the generic answers, such as ‘adaptable teams’, ‘fail fast’, ‘short development iterations’ and ‘collaborative environments’, amongst others; there was quite a long list on the board by the time the Simons had gone around all the groups and collated them. Then Simon C asked what we thought made an agile team successful. This again provoked a lot of discussion within the groups, and from what I remember our team came up with ideas such as ‘delivering on time/early’, ‘all the team being enthusiastic about the process and working together’ and ‘if the project is going to fail, the sign of a successful team is that they fail early’.

Simon C then discussed an assessment done with all the agile teams at GE: a 200-question paper where the team discusses each question together and ranks the answer between -3 and 4, with -3 meaning a systemic or organisational impediment that the team can’t sort out on their own, 3 meaning ‘we always achieve this’ and 4 meaning ‘we have a better way’. This was a clever way of assessing the teams while also giving them the motivation to improve what needs to be improved, and therefore to keep evolving as an agile team. I enjoyed this session and felt this assessment was definitely a way of showing how ‘agile’ a team could be.

In the break before the final session, I found myself torn between 2 or 3 different books from the book stand, but in the end plumped for a book by Nat Pryce and Steve Freeman (little did I know they were on the panel in the final session) called ‘Growing Object-Oriented Software, Guided by Tests’ – a book all about Test-Driven Development, which was one of the big themes of this conference.

Creating A Development Process For Your Team: What, How and Why – Giovanni Asproni, Nat Pryce, Steve Freeman, Rachel Davies, Allan Kelly & Willem van den Ende

This was a panel session where questions were fired at the panel from the audience, starting with ‘How do you persuade a company to fully buy into Agile?’, which caused discussion around finding the right reasons for moving to agile and moving over slowly. The panel also stressed that Scrum is not really enough for a company on its own; they also need to use XP practices such as Pair Programming and TDD. There were then quite a lot of questions about Pair Programming, and also people asking why Scrum is not necessarily enough. The panel explained that Scrum is just a project facilitation tool and that, for the team to be completely agile, other methods must also be used.

It was an interesting session, and I can’t remember even half the questions that were asked, but I remember feeling like I had learned a lot from it.

Mark Dalgarno then brought the conference to a close; there was a quick flurry of goodbyes and the inevitable exchange of business cards, and everyone then set off on their journey home.

It was a fantastic conference, it was great to meet like-minded professionals and exchange stories of our experience with the agile processes.

I must say thanks to Mark Dalgarno and team for putting on such a great conference and to Redgate for sponsoring the conference, being out in force to make us all feel slightly envious of their working environment and for providing lots of chocolate for the 2 days! Thanks to all the speakers and for everyone who I spoke to for being so welcoming and interesting!!

I hope I will be able to attend in 2011 and maybe even bring some colleagues along with me. 🙂

 

Lessons Learned from Agile Cambridge 2010 Day 1

Being my first conference in just over 4 years, I was unsure what to expect. Luckily, the journey was uneventful and finding the venue was relatively easy. It was great to join other professionals working in environments with varying amounts of agile methods in use. All the sessions I attended over the two days provided me with great information that I felt I could take back to work and suggest as ways to adapt our current methods.

Testing at Google – Dr James A Whittaker

The first session was the keynote speech from Dr James Whittaker, a Director of Testing at Google: ‘How Google Test Software’. It was very insightful to see how Google manage to test their software when they perform several releases a day for many of their products, made even more surprising when he mentioned the developer-heavy ratio of developers to testers. A phrase from his speech that sticks in my head was ‘Testing is like health care, it’s an ongoing process’. It was a very useful talk, and if I could take one thing away from it, it was that with the number of testers they have compared to developers, it is important to put the ball back in the developers’ court and make sure they have tested their own code to an acceptable standard before the testing team see it. The methods they have in place to ensure that any bugs raised are fixed as early as possible also make it seem like a very productive environment.

The Specification Game – Gojko Adzic & David de Florinier

After a half hour break spent talking to some other guys about their processes, this next session was one of the most eye-opening and intriguing of the conference. In the workshop that Gojko and David presented, we had to produce the first iteration of a blackjack game: we were given a description of the game, from which we had to pull the features we felt would be enough to produce an end-to-end working game for the first iteration. This is where we made our first mistake. Gojko had mentioned he was the customer, and not once did I, as the Product Owner for my team, feel that I needed to check something or negotiate requirements with him. I picked the requirements I felt fit, and the team started developing the prototype while the testers were pulled over for a few pointers with David. When they returned, we didn’t sync up as a team to make sure they understood exactly what the developers were developing; instead, they ploughed into writing test scripts. The problem with not syncing up as a team hit us when we ran the test scripts and failed the majority of them, for simple reasons such as the wrong words on the UI buttons (Stick and Twist rather than Hit and Stand) and the cards being dealt two at a time to each user rather than one at a time alternately.

We then received the User Acceptance Tests from the customer (Gojko) and quickly found that we failed all of them. Gojko then mentioned that not one team had gone to him earlier to get the acceptance tests, meaning that none of us actually gave the customer what they needed.

All this in 90 minutes, it was a very interesting session that highlighted 3 major points to me:

  1. It’s important to interact with the customer to find out exactly what they need from the product, and to negotiate scope
  2. Make sure the team is reading from the same page for the entire length of the project and that testers are aware of exactly what the developers are coding
  3. Ask for the User Acceptance Tests as soon as possible; this way you can make sure your product covers the minimum of what the user wants.

A great session, well led, which left us all discussing it over the lunch break afterwards.

Creating Agile Environments – Rachel Davies and John McFayden

I was keen to attend this session as I was interested to see what would be defined as a good agile environment. The session was made up of two main group discussions: first, people discussed their experiences of workspaces from hell, followed by workspaces from heaven. The additional fun factor was that we were asked, in teams, to create our worst and best environments using playdough and other creative materials.

There were some shocking stories among the worst environments, but most had a common theme: divided teams, with people working together but not able to communicate very well with each other; lack of communication with management; noisy or silent environments; lack of motivation for employees; and many others.

Obviously, in a lot of cases the good environments were the exact opposite: good communications, teams sitting in the same area for good collaboration, supportive management, incentives, a happy environment. A few additional features were things such as modern technology, both in machines and in communication tools, break-out areas, windows with great views and project workspace.

The one thing I took away from this session was that for a team to work well together, the environment in which the team is located is paramount to their success.

Winning Big with Specification by Example – Gojko Adzic

Before I attended this conference, several of our QA team attended the Iqnite conference in London and all came back to the office raving about this session. Having attended Gojko’s workshop earlier in the day, I was keen to attend this one too. Specification by Example was a new concept to me, and listening to this presentation really opened my eyes to the idea of stakeholders, developers and testers collaborating to define high-quality requirements, which can be turned into a set of high-quality user stories followed by a set of tests which become the user acceptance tests. The difference is that the team will understand the need for these acceptance tests far more than if they had simply been handed to the team by the users.

These then also turn into living documentation, which is in itself a very interesting concept: a document that is continually up to date, meaning any change made to the software can be accommodated very easily. I can really see the benefits of using this approach and will be looking out for Gojko’s new book when it is released.

The lesson shining through here again was to have the tests written before coding starts; then the team will know that the product is fit for purpose when it is complete.

Cyberdojo – Jon Jagger

I was looking forward to this session before the conference, but when it came to the conference itself, I was so impressed with all the previous sessions that I didn’t have time to even think about it before I turned up. The concept was reasonably simple: it was a pair programming exercise where, every 5 minutes, whoever was ‘driving’ would move to another machine and become the ‘passenger’. The way it was set up was as follows:

  • Every machine had a unique animal avatar
  • The first users on each machine would choose a language (C, C++, Java, PHP, C#, Perl, Python)
  • They would also choose an exercise (ranging from generating prime numbers to anagrams)

When the users had selected these, they were given a failing test script for the exercise and dummy source files. As soon as you press the play button for the first time, the current program compiles and the tests run; depending on the state of the program, a colour is displayed (green if it compiles and passes, yellow if it doesn’t compile, and red if it fails the tests). Once everyone had moved around a few times, Jon added an extra rule: every time he rang a bell, all teams had to get to green. This gave an interesting dimension to the coding, as some people would try to get their code working as effectively as they could in 5 minutes, and when the bell rang they would be in a state where it either wouldn’t compile or failed the tests; others would do a minimal amount of coding just so it continued to pass. It was a very interesting exercise that showed how important it is to keep control of your code. The lesson I learnt here, which I found very useful, was that if you have completed your exercise in the required time, rather than moving on to the next exercise, in a collaborative environment it is more useful to help others get to the same stage. Certainly a valuable lesson to take back to an agile team working in short sprints.
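
For my own notes, here is a toy recreation of that traffic light. The real tool is Jon’s; this just maps a syntax check and a test run to the three colours, and the file names are hypothetical:

```python
# Yellow: doesn't compile; red: compiles but tests fail; green: all pass.
import subprocess

def traffic_light(source_file: str, test_cmd: list) -> str:
    # Stand-in for "doesn't compile": a Python syntax check on the source.
    compile_check = subprocess.run(
        ["python", "-m", "py_compile", source_file], capture_output=True)
    if compile_check.returncode != 0:
        return "yellow"
    # Run the tests; a zero exit code means they all passed.
    tests = subprocess.run(test_cmd, capture_output=True)
    return "green" if tests.returncode == 0 else "red"

print(traffic_light("kata.py", ["python", "-m", "pytest", "test_kata.py"]))
```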

That was the end of the formal sessions, but there was the evening event sponsored by the kind people at Redgate Software. I sat and discussed agile processes with some very knowledgeable people and learnt quite a bit about how other companies are adopting them. The most important thing I learnt from this time was not to expect agile to work perfectly straight away; it will take time to get it right.

For what happened on the second day of the conference, there will be another entry soon.