What Skills #MakeATester – The Results

Last August I kicked off a little social media project, prompted by my shock at the lack of Software Testing or QA content on university syllabuses. I asked the increasingly awesome testing community to list the skills and attributes required to make a good tester.

Life then kind of took over and my second son was born in December, so it has taken longer than planned to collate all the responses from Twitter, from this site and from the Post-It notes at the #AylTest event where I kicked it all off. After merging categories and ordering them, there were 28 skills defined and over 400 votes. I can’t say I’m hugely surprised by the outcome, but it does make for interesting reading (at least in my view 🙂).

So I won’t bore you with the full list, but let’s look at the top 5 skills/attributes which were voted for:

5th  – Coaching & Facilitating – 9% of votes

Being a good tester also requires the ability to mentor more junior testers, coaching them through any struggles they may have. In my experience, it is also usually QA/testers who end up stepping forward to act as Scrum Masters or to facilitate project meetings and discussions, simply because they feel more comfortable doing so.

They may also have to work with developers to teach them good practices around unit testing, or simply to help ensure their code is testable.

4th  – Ability to Continuously Learn – 13%

After 10 years in Software Engineering (the last 7 in QA/Testing), I can honestly say that in the last few years I have genuinely felt like I have learnt a new concept, technique or tool at least every week, maybe even more frequently. This is largely due to the mass of amazing information and discussions available through social media, the testers’ Slack channel and the discussion forums run by the Ministry of Testing and other sites.

Also, with the rapid change of technology, it always helps to be one step ahead and understand the tools and techniques which will help you tackle the next app, website or system that needs testing.

2nd = – Problem Solving & Analytical Thinking – 14%

Every piece of software is a new problem to solve: have you tested it enough to mitigate the risks and validate that it is of a shippable level of quality? This requires a degree of analytical thinking to understand how to overcome the problem. There isn’t one particular way to do this, and every tester may tackle the problem in a slightly different way.

2nd = – Good Communication Skills (Written, Verbal and Listening) – 14%

Being able to articulate well is a must-have skill for all testers. Whether it is a defect report, a test case, a test charter or just discussing concerns with a co-worker, it is crucial that any communication is clear and concise, to avoid confusion or misunderstanding on the back of what was said. It is therefore also crucial that a tester is able to listen to any response and communicate further if needed.

1st – Curiosity & Asking Questions – 20%

If a tester isn’t curious, they won’t ask the right questions; if they don’t ask the right questions, they won’t be able to test the software effectively. These questions could be asking the developer why things work in a certain way, or questions asked of the software itself during exploratory testing or test case/scenario identification. Without this ability, I would fear for the quality of the products being shipped.

Asking questions is clearly the most important skill when it comes to testing, and it starts at the conception of a project. From day 0, the tester can start raising questions and queries which will get other members of the team to think differently and look into ideas which could lead to a higher quality deliverable.
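Out of curiosity, the tallying and the joint-2nd ranking above can be sketched in a few lines of Python. The per-skill vote counts below are invented purely so the shares line up with the reported percentages; only the roughly 400-vote total and the top-five ordering come from the survey.

```python
from collections import Counter

# Illustrative only: the real per-skill counts weren't published, so these
# are invented to line up with the reported shares (20/14/14/13/9%).
TOTAL_VOTES = 410
top_votes = Counter({
    "Curiosity & Asking Questions": 82,
    "Good Communication Skills": 57,
    "Problem Solving & Analytical Thinking": 57,
    "Ability to Continuously Learn": 53,
    "Coaching & Facilitating": 37,
})

def ranked(votes, total):
    """Rank skills by vote count; tied counts share a rank (hence '2nd =')."""
    counts = list(votes.values())
    rows, rank, prev = [], 0, None
    for i, (skill, count) in enumerate(votes.most_common(), start=1):
        if count != prev:          # a new, lower count starts a new rank
            rank, prev = i, count
        tied = counts.count(count) > 1
        rows.append((rank, tied, skill, round(100 * count / total)))
    return rows

for rank, tied, skill, pct in ranked(top_votes, TOTAL_VOTES):
    print(f"{rank}{' =' if tied else ''}  {skill}: {pct}%")
```

Run against the full 28-skill tally, the same function would reproduce the whole table, ties included.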

So what’s missing?

There have been a lot of debates over the last couple of years about whether testers need coding skills. My view is that even if you can’t code, you should at least have the ability to read code and understand what is going on, to have a fighting chance of testing effectively. But my data shows that only 5% of the votes went to this skill, which suggests it may not be as high on people’s lists of desired skills as I first thought.


What Now?

The next step for me is to find a way to show anyone interested in testing roles that it isn’t necessarily about the technical skills you need, but more about making the most of the soft skills you may already possess. Being able to work through problems, communicate clearly and ask the pertinent questions would be a huge asset to a QA team, possibly more so than one individual who could automate all the testing.

I would love to reach out to students studying Computer Science degree courses and show them that Software Testing is an option for them, and maybe eventually even push the universities to start including the content in their courses.

What Skills #MakeATester? – A collaborative approach to help future Testers get a chance.

So as my last post showed, universities aren’t really treating Software Testing as a core topic of Computer Science degree courses. By the time graduates are looking for jobs, testing isn’t really on their radar. The same can be said for potential testers who haven’t got degrees. As shown in a recent “Let’s Talk about Tests, Baby” podcast and the survey that followed, a fair number of people fall into testing roles; some have qualifications in other areas, sometimes not even related to IT at all, and end up in software testing.

This is not necessarily a bad thing, but surely there is a way for people to train themselves for that first testing position, rather than only accidentally falling into it? I know from my own experience: I learnt what QA did while still working as a developer, then transitioned across to become a QA Engineer when the opportunity arose. If I had known which areas to focus on beforehand, maybe I would have been better prepared to decide my path at the beginning.

I don’t know whether this has been done before, but I intend to collate a list of ‘ideal skills’ for becoming a tester. These may be soft skills, more technical skills or any particular ability that is useful in a testing role. With this amazing testing community, we must be able to come up with a sizeable list of ideas. I will then cut the list down to the top 15-20 and hopefully use it to show students, and other non-testing folk who may have the skills, that testing is a genuine option for them.

So how can we create this list?  Two options for you:

  1. Tweet #MakeATester to @siprior with your suggestions
  2. Fill in the form below

I’ll report back in a month or so with my findings.


Software Testing at University – Is it an Option?

When I left university 10 years ago, I had no clue I would end up in QA/Testing. I didn’t even realise it was an option! Looking back on my degree course (and I really enjoyed my degree), it was very programming heavy (C#, C++, Java and Prolog were all taught). Testing was only covered as part of the Software Engineering module, where maybe a week or two of lectures were given, plus a group project where one person was responsible for ‘testing’.

So, ten years on, have things changed? I’ve looked through the course content for the top UK universities for Computer Science to see how much content on Testing is available. Here are my findings:

  • Working down the rankings, the first university with a module on ‘Software Testing’ was Swansea (ranked 11th)
  • Newcastle (ranked 14th) has a ‘Software Engineering Professional’ module in year 1 which covers ‘testing and debugging tools’ and a ‘Software Engineering’ module which covers the testing fundamentals
  • Surrey (ranked 8th) had a ‘Software Quality’ module in its ‘Software Development for Business’ degree
  • St Andrews, Oxford & Cambridge have modules on Model Checking, which is not strictly testing but rather a pre-execution activity.
  • 6-7 of the top 10 have Software Engineering modules where testing is a line item in the module description

So, in short: not many universities cover testing in any great detail. Swansea and Newcastle cover more than most.

I imagine a lot of students are leaving the top universities thinking Programming or Software Development are the main options for careers. How can it be changed? The software industry has evolved over the past few years to realise the important role testing plays in the software lifecycle. Can universities be made aware of this too?

I’m potentially going back to my university to give a careers talk in the near future and really want to paint this picture of the importance of testing. If students could see and feel the atmosphere at conferences such as TestBash, or at tester gatherings up and down the country, and feel the passion so many testers have for the job they do, it would open their eyes to the potential of other careers available to them.

Testing should be seen as an equal to development or at least a close second but a lot of these degrees are portraying it as a much smaller element of the SDLC.

I think maybe the issue with most (or all) Computer Science degree courses is that they are very programming heavy; perhaps they could teach slightly less programming and cover a broader range of topics, not just testing.

If anyone has any suggestions for things I could share with students about testing, if and when I get to give a careers talk, please comment here or tweet me @siprior.



Making Your Testing Effective

I’m sure most of us have worked on projects where, without running any particularly extensive test scenarios or even diving far into exploring the product, you find breakages that would be immediately obvious to the customer. That kind of project depresses me (even though it is fun to find defects). I like nothing more than a project where I struggle to find defects and all the user scenarios work correctly, as this generally means the code has been well thought out. It usually also means I have been able to work alongside the developer to ensure quality has been built into the code, and it may also mean the customer has been listened to and the requirements have been developed against.

As mentioned in a previous post, for me testing starts as soon as I have started thinking about a project. So if we have got to a point where user scenarios are being met and minimal defects are being found, that would suggest I have done a reasonable job of asking the right questions and verifying that all the features work correctly through whatever method of test execution I have followed.

But if I did have to raise a lot of defects, and the user scenarios didn’t work, does that mean my testing wasn’t effective? Of course it was effective: if it hadn’t been, I wouldn’t have found the issues and proved that the level of quality needed for release wasn’t there. Testing can be effective in both scenarios. It’s about preparing your testing in the right way, so you can have confidence in what you are doing, regardless of the standard of the product under test.

So how would I define ‘effective’ testing? I would suggest these ideas:

  1. Know The Product – Read any documentation you can. Ask any questions which will help your understanding (however stupid they may seem; if you need to know the answer, how stupid can they be?). Review code and unit tests to understand the flow of the system. Spend time exploring the paths through the product so that you are familiar with the expected routes from end to end. Having full confidence in the product will ensure you know what to test.
  2. Understand the Risk – Work out the areas of the product which pose the most risk, and ensure these are prioritised before you start writing any scenarios. Understanding how to mitigate the risk in each of these areas will enable you to focus on testing them fully.
  3. Document Your Plan of Action – Now that you feel confident about the product, find a way which works for you to document your planned testing. This may be a mindmap, a word document or even scribbled on the back of a beer mat ;). What is important is that you have some way of being able to look at what you have thought to test and identify if there are gaps.
  4. Review with Peers and Stakeholders – Now maybe that beer mat wasn’t the best idea. It would be easy to hand your test plan over to someone else, for them to skim through it and hand it back without comment. For me, reviewing would ideally mean walking through the test plan as a discussion with team members (devs & testers) and with the people who provided the requirements. Talking through the plan with others will show how confident you are in what you are going to test; you may find additional scenarios to cover and perhaps pick up suggestions on how to improve your approach.
  5. Be Prepared to Evolve Your Testing – There’s a good chance your original plan will not be exactly what you end up completing when executing scenarios. There will be additional options or configurations which may not have been obvious before. Don’t be afraid to divert from the plan; it should never be a hard and fast commitment, more a ‘Plan of Intent’.
  6. Review Results – When you have completed your test plan, go through the results again and make sure you understand the outcomes. Discuss them with another team member, who may highlight something you hadn’t initially noticed.
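To make steps 2 to 4 concrete, here’s a minimal sketch of a test plan as data: areas with an agreed risk score, sorted so the riskiest get written up first, and anything without scenarios flagged as a gap for the peer review. All the names and scores here are invented for illustration; use whatever format (mindmap, Word document, beer mat…) actually works for you.

```python
from dataclasses import dataclass, field

@dataclass
class TestArea:
    name: str
    risk: int                      # 1 (low) to 5 (high), agreed with the team
    scenarios: list = field(default_factory=list)

# A made-up plan for a fictional product.
plan = [
    TestArea("Payment processing", risk=5,
             scenarios=["happy path", "card declined", "timeout mid-payment"]),
    TestArea("User profile page", risk=2, scenarios=["edit details"]),
    TestArea("Order history export", risk=4),   # no scenarios yet: a gap
]

# Step 2: tackle the highest-risk areas first.
plan.sort(key=lambda area: area.risk, reverse=True)

# Step 4: anything with no scenarios is a gap to raise in the review.
gaps = [area.name for area in plan if not area.scenarios]
print("Order of attack:", [area.name for area in plan])
print("Gaps to discuss:", gaps)
```

Even a sketch this small makes the review conversation easier: it shows what you planned, in what order, and where the holes are.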

No two projects are the same, and there may not be one approach to test scenarios which works for all projects, but following these suggestions will hopefully ensure you have confidence in your testing and a definite view of the quality of the product.

The ultimate goal is to ensure that your testing is doing its job: finding defects if there are defects to find, and proving the quality of the product, so that the team has a clear understanding of when it is good to release.

The Testing Mindset

For me, Testing is a mindset, not just a role that needs to be performed. Testing isn’t just part of the development cycle, it should be ingrained in every stage. Every aspect can be ‘tested’, whether that be requirements, architectural diagrams, code, unit tests, test scripts, user docs etc.

When Does Testing Start?

If I’m starting work on a project, I am starting to test from the moment I am assigned. There is a good reason for this: I feel that testing is more than just writing test plans, executing test cases, developing automation or even exploring the product in a time-boxed exploratory session.

Testing isn’t just Test Cases

From the moment the first discussion about a new product or feature starts, I am testing, I am learning about the changes, I am understanding the features, I am thinking of pertinent questions which will both aid my understanding and assist development with design decisions and enhancing the testability of the code they will be producing. Yes, I will be documenting my thoughts and providing some form of test plan (maybe as a mindmap), I may be writing test cases, I will be involved in creating automation to verify test cases and raising defects, but these are just part of the overall role.

Testing is used to Improve Quality

Testing isn’t just about how something can be broken; it should be about how we can help improve the quality of the delivered product. If that means having a discussion at the start of a cycle where you question the design and offer improvements to enhance quality, then you are finding a way to create better quality products, and that is still a form of testing. It is certainly more efficient and effective to invoke the change at the design phase than to raise a bug and have development fix the issue later. And when you ask those questions in the design meeting, you are often highlighting a possible test case that can be executed at a later point; test cases can be identified at any point, not just when you are writing a test plan.

When is Testing Finished?

A tester’s job is never done; there may always be more test cases to identify and more scenarios to run. It’s about being confident that a high enough level of quality has been proven and that the risk associated with the outstanding work is low. This will often be defined by criteria set by the team, or by your own standards. That means the fact that all tests are passing is never enough on its own to say testing is complete; there is always more that could be done.

And with it being a mindset, the fact that more can be done will sometimes mean you probably could have signed off the testing earlier than you did, but you “just want to make sure”.

In fact, I’ve never met a tester who signed off before they’ve put in a bit of extra work.

Testing Certifications – Are they worth the paper they are written on?

There has been a lot of buzz around this topic at recent testing events and in forum discussions between testing professionals. There are a fair few different certifications around, but are they of value? A quick Google search turned up the following:

  • ISTQB – Foundation, Advanced (Test Manager, Test Analyst and Technical Test Analyst) and Practitioner
  • Certified Software Tester (CSTE) – Varying levels and also offer a separate Certified Software Quality Analyst (CSQA)
  • Certified Agile Tester (CAT)
  • Certified Software Test Professional (CSTP)

There are many others, as well as courses which don’t provide ‘certifications’.

Now, let me state that I hold the ISEB/ISTQB Foundation and Intermediate Certificates in Software Testing. I attained them a long time ago, and at the time they worked well as a base of knowledge to get me going in the testing world. That is how I saw them: a way to gain an understanding before applying the knowledge and diving deeper into the different topics on my own. I have since done lots of reading and many free online courses around aspects which were only briefly mentioned in the ISTQB courses.

Now here comes my rant… Doing the courses is one thing, but for most of them, passing the exam means you have digested the definitions and content from the course and answered most questions correctly in a multiple-guess exercise. THIS DOES NOT MAKE YOU A GREAT TESTER! Stating you are a certified tester sends out the wrong message. Putting it in your profile name on LinkedIn (“John Smith – ISTQB Foundation Certified Software Tester”) is wrong; it should not be something you are shouting from the rooftops. You should be saying something like the following:

I am an experienced Software Tester with advanced skills in x,y and z.

(Note the lack of any mention of the certification.) Certifications should not define you as a tester; you should be considered for roles on your skills, not on whether you attended a particular course and passed an exam. Companies also need to stop specifying certifications in their job specs; there are plenty of very good testers without them who would do the role better than some of those who have them.

There are plenty of courses worth doing out there that don’t give a Certified stamp at the end but still have some form of assessment, such as the BBST series (link here). These are courses I would like to get around to doing, but they are not 3-day courses with a multiple-choice exam; they require continued effort over a period of weeks or months, with practical assessments as well as an exam.

Another course which comes highly recommended is the Rapid Software Testing course by James Bach, Michael Bolton and Cem Kaner, three industry gurus who give students confidence they can test anything in any timeframe (linky).

I guess the point I’m trying to make is that these ‘certifications’ should be treated like any other training course: if you feel you will get value from attending, then go on them. Just don’t hold the certification up as a badge, because it shouldn’t give you any additional kudos over other testers who haven’t got one.

Testers are people and people learn in different ways. Testing is a field of work where there are constantly new things to learn, new skills to develop and new concepts to get your head around. Not everything you need to know will be in the syllabus of a certification course.

Like I said before, I have nothing against some of these courses, but they don’t make you the complete tester. Use them as stepping stones to further your knowledge and grow in the testing role.

Personally, I find now that I learn just as much from reading other testers’ blogs or from attending testing events and hearing new ideas. Learning opportunities can arise in many forms, and all will be useful in making you a better tester.

So my answer is: certifications are only worth the paper they are written on if they are then extended upon and the knowledge applied, rather than treated as “everything I need to be a good tester”.

Spreading the Word – Being the Sociable Tester

For me, testing is a mindset rather than just a role, and sometimes that can and does affect other aspects of life. I have lost count of the times I have tried to break something intentionally, just to check that it can handle error cases. That would be fine, but doing it to the TV while my wife is trying to watch it may not be the best idea (just for the record, the TV didn’t break), or even worse, ‘testing’ one of my nine-month-old son’s interactive toys! 🙂 It’s a habit that is sometimes difficult to avoid…

It is sometimes easy to forget that not everyone has the same attitude towards testing. I was recently asked by someone who is not technical at all:

“Why do you test? What needs testing?”

I tried to suggest the usual examples:

“Would you be a passenger on a plane if you thought they hadn’t tested that it worked properly?”

“Would you put your child in a car seat if it hadn’t been safety tested?”

Then I suggested software is no different, and that everything on a PC/Mac/phone/tablet SHOULD be tested in some form before being deemed good enough to release to its intended audience.

Having this discussion got me thinking of ways I could help improve the attitude towards testing, especially from people who aren’t testers and also improve my own skillset at the same time.

The first place to start, for me, was at work with my colleagues. Aside from doing my job to the best of my ability, I have also done the following:

  • I have printed out James Bach’s blog post ‘A Tester’s Commitments’ (http://www.satisfice.com/blog/archives/652) and put it up next to my desk.
  • I have my software testing books on show so anyone can come and read/borrow/discuss parts of them
  • Not being afraid to talk about testing and suggest ideas to developers on how to make their code more testable, hopefully raising the awareness that they need to think about this before they develop their code
  • I have started putting together mindmaps of how testing could improve projects that currently don’t have the resource
  • Attempted to start an internal community where anyone who wants to discuss testing has somewhere to share ideas.

Then there is the external testing community:

  • Joined online communities such as the Software Testing Club
  • Attend conferences, there are plenty of these all year round, some are testing specific, some are software or even IT specific but it’s the people present that make the conferences.
  • Read blogs (James Bach’s, as mentioned earlier, or see ‘my favourite blogs’ in the menu bar at the top) and listen to podcasts (“Testing in the Pub” or “Let’s Talk about Tests, Baby” to name a couple I have listened to recently)
  • Started a local tester gathering where people from all around the area can join and share ideas, and not limiting it to just testers but anyone who has an interest with testing (https://priorsworld.wordpress.com/aylesbury-tester-gathering)

Obviously, not everyone wants to be sociable, but I genuinely believe that my skillset and my people skills have improved no end since I started being more open to discussing, asking questions and sharing ideas and stories with other like-minded people.

So what’s the next step with people who don’t share that mindset? How do we raise the profile of testing so people understand the importance of the job we do? Some thoughts:

  • Holding some kind of event where non-testing people get the chance to try to find problems in a buggy piece of software?
  • Getting out into schools and teaching testing alongside programming in the new curriculum?

Any other thoughts? Would love to hear some ideas. 
Next stop… The world! 😜

Not Familiar with Testers? – Proving Your Worth With a Development Team

Last June, I had the opportunity of a new challenge within my current company, which I grasped with both hands. I moved from a team which had a very well-oiled engineering process, a very stable test framework which gave the team confidence in their product and a team which I had worked in since I graduated from University 7 years earlier. The team I moved to had no active Testers and was still trying to define their engineering processes.

This has proved to be a challenging but enjoyable change and has really made me work hard to show what I can bring to the table and show why testing/QA teams are important.

One of the first actions when I joined the team was to ask to be added to code/peer reviews, previously the code had been reviewed between developers only. This brought some resistance initially –

  • “Why do you need to be on Code Reviews, you’re only QA”
  • “What benefit will it bring having you on the review?”
  • “It will take longer”

I went into more detail on this in a previous post, but the point is that they were not keen to start with, and the next stage was to bombard me with so many code reviews that I had very little time to do anything else. I stuck with it and eventually got through them.

There was initially a reluctance to involve QA but, to be fair to the team of developers, they were open to trying once we started to discuss things with them.

Over the next 6 months or so, as a team we worked hard to prove ourselves and we are now at a point where QA are considered in design discussions, code reviews and any major decision making. It’s been a challenge but we are now showing signs of working as one team. There’s still a way to go but we are happy with the progress. But how did we get to this stage? I put it down to 3 things:

1. Getting the Right People – The team being put together has to have a solid set of skills: a good mix of traditional testing skills and good technical developers to work on the frameworks. The testing mindset needs to be there, and all of the team need to be strong enough to question things and follow through when something needs doing. When developers are reluctant to work with testers, it helps if the test team has the people skills to get to know them socially, or at least to talk about non-project/work topics, building up enough of a relationship that it becomes easy to discuss work topics with them.

2. Find Ways to Be Involved – Asking questions, listening to conversations and being willing to take tasks which involve working alongside a developer will all help aid your understanding of the functionality. Know your stuff; if something needs looking up, spend time reading around the subject so that you can have discussions with the developers about it. Ultimately, it is about doing all you can so that the developers trust that you know what you’re doing, and that you will test the product effectively and verify its quality. Set up bug scrubs or design discussions and invite development along; it’s things like this which will prove that you are all fighting for the same cause.

3. Find Issues Through Testing – It might sound obvious, but if the team were previously used to relying on unit testing and their own dev testing, then the QA testing needs to enhance the coverage and find issues that their testing wouldn’t find. Whichever way the testing needs to be done, put together a framework which will enable the team to spend their time testing, rather than constantly having to fix issues and being unsure whether the issues found are due to the framework or the product under test. The next stage in proving worth is to find issues which may not otherwise have been found, issues which would have caused major problems if released.

Having these 3 things will give you a fighting chance of a testing team which works well with development. Maybe we’ve been lucky with the people in our team, but the difference in the attitude towards the testing team over the last 6-9 months has been huge. Here’s hoping it will continue to improve.

TestBash 2015 – More than Just a Conference

I have to admit, I was really excited about attending TestBash in Brighton. It was my first conference for 18 months and there was a real buzz about this one on social media. The schedule looked really interesting, and there were five of us attending from work, so it was kind of like a team outing.

The journey down to Brighton on the Thursday didn’t go without a hitch: the train from Victoria to Brighton got caught behind a broken-down train, which meant we got in 30 minutes later than planned. By the time we had checked into the hotel and found somewhere to eat, it was too late to attend the Pre-Conference Social. Personally, I was gutted, as I had been speaking to several other testers on Twitter in the weeks beforehand and was looking forward to meeting them. The rest of the team seemed quite glad to be going back to the hotel to get some sleep, and I can’t really blame them for that.

The following morning, Jesus from our team and I got up at 6am and joined the Pre-Conference Run; we completed the 5km run along the promenade and were back at the hotel having breakfast by 7.30.

We arrived at the Brighton Dome, and from the moment we walked in, TestBash had a different feel to the other conferences I had attended. Whether it was the ninja stickers we wore with our names on, rather than the formal name badges of other events, or the Ministry of Testing t-shirts, there was a real feel of community.

Here is the Intel Security team who attended the conference

There were lots of great talks during the day with lots of interesting concepts:

  • It was interesting to discover the difference between the testing and release processes of iOS and Android apps.
  • I was fascinated by Martin Hynie’s story of how changing the name of the Test team to Tech Research, then to Business Analysts, then back to Test caused the company to treat the same group of individuals differently, really showing the power of job titles.
  • Vernon Richards gave an amusing look into some of the phrases that are thrown around about testing such as “Anyone can test” or questioning why testing didn’t find the one bug that caused problems in production. He also gave an example of how to deal with a product manager who wants a number for how long testing will take and doesn’t get the answer he wants.
  • Maaret Pyhajarvi’s session really showed that quality isn’t the responsibility of just the testers; in fact, Maaret went as far as to say that quality is built by the developers, and testers just inform on the quality. This came from her account of working as the solitary tester on a team of developers: initially, quality went down with the addition of a tester, as the developers became less vigilant with their own testing before handing over, expecting Maaret to pick up all the testing. She showed us how she managed to get them on board and, as a team, improve the quality.
  • Iain McCowatt discussed how some people have the intuition and tacit knowledge to see bugs, whereas others have to work methodically to find them; he then went into ways to harness that diversity within a test team.
  • The concept that stuck with me from Matthew Heusser’s talk on getting rid of release testing was that changing the process shouldn’t be done all at once; the best way is to try one or two new stages first and make gradual changes. (I also liked the fact that he worked on his slides on his tablet as he presented.)
  • Karen Johnson gave a very thought-provoking talk on how to ask questions. It really resonated with me, and I can certainly see ways to get more out of people when I’m asking questions.

There were 3 talks which really stood out for me.

The Rapid Software Testing Guide to What You Meant To Say – Michael Bolton

I had interacted with Michael a few years previously, when he gave me some constructive criticism on one of my earlier blog posts on this site, so I was intrigued to see what he was presenting. This was a very interesting talk; Michael is a very engaging speaker, and it’s clear why he is one of the most respected members of the testing community.

The aim of this session was to remind us of some phrases commonly used by testers which can cause misunderstanding or misconception. He showed examples where he exchanged the word “testing” for “all of development” in phrases such as “Why is testing taking so long?” and “Can’t we just automate testing?”, suggesting that people may use testing as a scapegoat when in fact the whole process should take the blame.

Michael went on to talk about how safety language should be used: phrases such as “…yet” and “so far”, and not making statements like “It works” but instead saying “So far, the areas which I have tested appear to meet some requirements”, or words to that effect.

The discussion of testing vs checking came up (which was part of the issue Michael had with my earlier blog post… I’ve since done the necessary reading to know the difference), showing how checking fits into the testing process.

Overall, I learnt that it can be very easy to make statements which raise expectations higher than they should be, or give the wrong message completely. I will certainly be using safety language more often in the future.

I also feel it would be really useful to go on the Rapid Software Testing course. Something to look into this year.

Why I Lost My Job as a Test Manager and What I Learnt as A Result – Stephen Janaway

I hadn’t heard Stephen present before, and he came across very well. His talk covered how, when he was working as a Test Manager in an agile process, he managed individuals across several different teams, while each team had its own development manager. He talked of the difficulties in decision making and how slowly the products were developed and released.

Stephen then described what happened next: Test Managers and Development Managers were removed from their roles, a Delivery Manager was put in each team, and the process improved. The question then was: what happened to the Test Manager? Stephen explained the roles he is now involved in, such as coaching management on testing and on how to manage testers, setting up internal testing communities so that the testers still have like-minded people to discuss testing issues with now that they haven’t got a test manager, and generally being an advocate for testing and quality within the organisation.

It showed that Test Managers need to be adaptable and prepared to take a slightly different path. This seems to be the way the testing industry is going, so it was interesting and reassuring to see that there are other options out there.

The other point that hit me during this presentation was that of internal testing communities. We have lots of individual test teams working on different projects, all developing their own automation frameworks and using different tools; it would be good to bring everyone together to share ideas, and maybe get some external speakers from the testing community in to inspire them.

I really enjoyed Stephen’s talk and it gave me plenty of food for thought about the future.

Automation in Testing – Richard Bradshaw

Richard’s talk resonated with me for one reason: he explained how in his early years he had been seen as the automation guy and would try to automate everything, until he realised that too much had been automated and a benefit was no longer being seen. I have seen for myself how teams can be so focused on having all of their testing automated that they actually spend more time fixing failing tests when the next build completes than they do writing any other tests. I whole-heartedly agreed with Richard when he stated that automation should be used to assist manual testing (writing scripts for certain actions to speed up the process) rather than being relied upon for everything.

This does seem to be a hot topic for discussion, as there is the question of how automated regression tests/checks and automated non-functional testing should be approached. This presentation definitely gave me a lot to think about on improving how we use automation when I get back to the office.
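As a small sketch of what “automation assisting manual testing” can look like in practice (the scenario and names here are my own illustration, not Richard’s): a script that generates throwaway test data, so a manual tester can start exploring from a populated state rather than filling in registration forms by hand.

```python
import random
import string


def make_test_users(count, seed=None):
    """Generate deterministic throwaway user records, so a manual
    tester can start exploring from a populated state instead of
    typing out registration forms by hand."""
    rng = random.Random(seed)  # seeded, so a test session can be reproduced
    users = []
    for _ in range(count):
        name = "".join(rng.choice(string.ascii_lowercase) for _ in range(8))
        users.append({
            "username": "test_" + name,
            "email": name + "@example.test",
            "role": rng.choice(["admin", "editor", "viewer"]),
        })
    return users


if __name__ == "__main__":
    for user in make_test_users(5, seed=42):
        print(user["username"], user["role"])
```

The point isn’t the script itself but the division of labour: the tedious setup work is scripted, while the interesting, judgement-based testing stays manual.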

Richard presented this really well and I would say it was my favourite talk of the day.

I have to say that the schedule from start to finish was enjoyable, the lunch was delicious and the organisation of the day was fantastic. I honestly can’t wait to go back next year.

I really felt that the testing community is a great place to be: so many great people and great minds with interesting ideas, and a real chance to improve yourself by attending conferences like this.

I am really glad that I found http://softwaretestingclub.com and was able to find details of the conference on there. The next step for me is to help set up the internal testing community and get them to look at it too. Maybe we can have a bigger Intel Security contingent next year, maybe I will find something to present. 🙂

Overcoming the Resistance – QA Involvement in Peer Reviews

Maybe I was quite naive about peer reviews, but in my previous experience it was a natural part of the process to have QA/testers involved in code reviews alongside developers. Whenever a new feature or bug fix was implemented, before the code was checked in, the developer would set up a walkthrough with another developer and a member of the QA team. The team I was part of was quite a mature engineering team, with a defined coding standard that was ingrained into the team; committing code without showing it to and talking it through with QA was never given a second thought. These reviews would provoke discussions around how to test the features and whether the developer-written unit tests had enough code coverage for QA’s satisfaction. Moving to a different team, I have seen that this isn’t necessarily part of the process: peer reviews happen between developers, and involving QA has never been considered.

I have read a lot about this subject and actually, the level of maturity around code reviews of this first team is relatively rare in the industry. So, why is it so rare? I guess it depends on perspective and the level of testing being done:

  • If ‘black box’ testing is the main form of testing, then I guess not knowing about the code is the ‘right’ thing to do?
  • If ‘white box’ testing is used, then knowing the functionality you are testing is paramount; other than functional specifications, the best way to see how the area under test works is to review the code.

All the testing I have ever done has been a mixture of the two: there has been testing I could pull straight out of the user guide or functional specification, and testing where I needed to know the intricate details of the code to ensure I was covering all feasible code paths.
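To illustrate what I mean by code paths (a made-up example, not from any real product): a function with several branches, where only reading the code tells you which inputs exercise which path.

```python
def shipping_cost(order_total, is_member, country):
    """Hypothetical pricing rule with several branches: the sort of
    logic where the spec may list one or two cases, but reading the
    code reveals every path that needs a test."""
    if order_total <= 0:
        raise ValueError("order total must be positive")
    if country != "UK":
        return 15.00   # flat international rate
    if is_member:
        return 0.00    # members always ship free
    if order_total >= 50:
        return 0.00    # free-shipping threshold
    return 4.99        # standard domestic rate


# One check per feasible path, derived by reading the branches above:
assert shipping_cost(10, False, "FR") == 15.00  # international
assert shipping_cost(10, True, "UK") == 0.00    # member
assert shipping_cost(60, False, "UK") == 0.00   # over threshold
assert shipping_cost(10, False, "UK") == 4.99   # default rate
```

A functional specification might only mention the free-shipping threshold; the error path and the member rule are things you would spot in a code review.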

I have always found it beneficial to be involved in peer reviews, even if I say nothing during the review and just soak in how the code works while writing down ideas for testing. Usually, though, I will ask questions such as “What if I entered this here?” or “What if I did this?”, using the meeting to force the developer to think about their implementation rather than just plodding through the code line by line.

So why is it so alien to some teams to involve QA? Here are some of my thoughts on some common phrases used:

  • QA don’t have the skills to review code – Not every QA resource will know the syntax of the particular language, but does that mean that, by sitting with developers and understanding how the code works, they won’t find issues or raise questions which prompt the developer to improve their code?
  • Having QA involved will delay the build/release – Only if you treat QA as a separate entity to development and run separate peer reviews. If they are involved in the same peer review, then it shouldn’t make much difference. We are here to prove the quality of the work, not to be a hindrance.
  • It’s only a one line change, why would QA need to review that? – On one hand, yes it is only one line but the context of that one line could have an impact on some existing testing, or just having QA aware of the change could be useful.

I’m not for a second suggesting that development teams are dead against the idea, but I think, as we move to an increasingly ‘agile’ world, separating Development and QA at this level needs to change; we should be promoting a ‘One Team’ approach where value is provided by everyone involved. QA can bring value to code reviews. Quality should be built in at the start, and the earlier this can be proved, the better. It needs to be clear that a critique of the code is not a critique of the developer.

Some quick wins may be needed to win the development team over:

  • Read up on the functionality before you attend the review, so you have a basic understanding of how it was described to be developed
  • Ask logical questions
  • Discuss options for testing
  • Flip it and have development review tests (share both activities)

Baby steps are needed for progress: find a way to get involved in some small tasks to start with, and build up trust with the developers that you are not just there to be a pain in the backside. If you work side by side with the development team, it will improve the product delivered.

Any thoughts on this topic would be most appreciated.