Quality assurance is a pain for everybody. There are countless ways a bug can reveal itself, and they aren't always easy to find. That's why it's generally a good idea to let someone with a fresh pair of eyes do some of the bug hunting. In this post, I'll discuss the value of QA and how to make sure you're doing it productively. I'll also cover a few possibly all-too-familiar QA stereotypes you'll want to avoid falling into.
QA and Ensembling
Why is it so important that someone else verify that your code performs to specification? The best answer is an analogy from machine learning. When a model trains on a limited set of data, it risks a phenomenon called overfitting. A model that isn't exposed to enough combinations of features will naively infer correlations that don't actually exist, and will make predictions that don't generalize. That is, the model optimizes for the best accuracy over its training data alone. If your training set isn't large or varied enough, the model will struggle when applied to new data.
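The effect is easy to reproduce with a toy model. Here's a sketch (pure Python, with made-up data) of a 1-nearest-neighbor "model" that simply memorizes three training points: it scores perfectly on its own data but misclassifies an input its sparse training set never covered.

```python
# Toy overfitting demo: a 1-nearest-neighbor "model" that memorizes its
# training data. The true rule (unknown to the model): label is 1 when
# x >= 5, else 0. The training set is too sparse to capture it.
train = [(0, 0), (2, 0), (9, 1)]  # (feature, label) pairs -- hypothetical data

def predict(x):
    # Return the label of the closest memorized training point.
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Perfect on the training data...
print(all(predict(x) == label for x, label in train))  # True

# ...but wrong near the real decision boundary: the nearest memorized
# point to x=5 is (2, 0), so it predicts 0 while the true label is 1.
print(predict(5))  # 0
```

The model hasn't learned the boundary at 5; it has only learned its three examples, which is exactly the trap a developer falls into when testing against one or two inputs.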
A developer can overfit in much the same way when implementing a new feature. Sometimes it's getting too caught up in the groove of testing a feature against only one or two examples; sometimes it's an unforeseen shortcoming of the original requirements. Either way, if a new feature introduces an unintended bug, that's a problem, and it should be addressed before going into production. Having someone else go from the specifications to hands-on interaction with the feature helps counter the selection bias that keeps you from spotting real bugs.
The solution to overfitting in machine learning is also a great analogy for QA. With multiple models -- built from different algorithms, different data sets, or both -- you get varied opinions on the same set of features. Bring all of those opinions together, whether categorically or probabilistically, and you generally get a more well-rounded, more accurate overall hypothesis. This is called ensembling, and it's commonly used in machine learning to counter overfitting caused by a paucity of data. It's a practical, isolated demonstration of how the "wisdom of the crowd" works. Bringing in another person to read the requirements and try a newly implemented feature is exactly the same move in software development. Their different point of view may catch something you didn't. Their different interpretation of the requirements may raise questions you never asked, or foresee ramifications you didn't check for. This isn't a new idea; there's a reason we're all familiar with the old adage that two heads are better than one.
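In code, the simplest form of ensembling is just a majority vote. A minimal sketch (the predictions are made up for illustration): three models that are each wrong on a different example, yet their combined vote is right every time.

```python
from collections import Counter

# Hypothetical ground truth and three imperfect "models":
# each one is wrong on exactly one (different) example.
truth   = [1, 0, 1, 1, 0]
model_a = [0, 0, 1, 1, 0]  # wrong on example 0
model_b = [1, 1, 1, 1, 0]  # wrong on example 1
model_c = [1, 0, 0, 1, 0]  # wrong on example 2

def majority_vote(*predictions):
    # For each example, take the most common prediction across the models.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

ensemble = majority_vote(model_a, model_b, model_c)
print(ensemble == truth)  # True: the vote corrects each individual mistake
```

Each model alone is 80% accurate; because their mistakes don't overlap, the ensemble is 100% accurate. That's the "wisdom of the crowd" in five lines, and it's the same reason a second pair of eyes on a feature catches what the first pair missed.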
Some teams -- especially those working toward continuous integration or continuous deployment -- will find it worth devoting budget to a dedicated QA specialist. If you're in a workplace where you're responsible for your own bug fixes, consider asking a coworker whether they'd be interested in mutually QAing each other's assignments. This would be in addition to peer code review, which you should already be doing.
Deconstructing the Critics
Whether you're a QA specialist or an engineer assigned QA as part of your responsibilities, remember that you're critiquing the effectiveness of someone else's work. It's an important component of productivity and product quality, but it's also hard to keep emotions out of. So be concise, diplomatic, and correct in your interpretation of the requirements. Here are some QA stereotypes you don't want to fall into:
The Punisher

Any QA person can be a punisher if they're having a bad day, and they may come across as rude by accident. Sometimes, though, being curt or harsh is worn as a sign of authority. Remember that quality assurance isn't just for your sake; it's often at the behest of the people who sign the checks, and an attitude of nonchalant expertise can impress those people and mean job security for the person doing the QA. Try not to take it personally. Conversely, if you come across as a punisher, be prepared to be on people's bad side. You know you're one if you've ever responded to someone asking why a ticket was reopened with, "Well, did you even read the requirements?"
Tickets Look Like: "This looks ridiculous. Fix it." "We should dump this stupid, worthless feature (you stayed late all week implementing)."
Most Likely To: (Unintentionally?) Hurt your feelings.
The Robot

Robots make you wonder why anyone is paying a person to evaluate your work. Besides, if you're using unit testing and acceptance testing properly, a build task will generally tell you more than the comments you get when a robot reopens a ticket.
Tickets Look Like: "Feature does not match requirements. Please see requirements."
Most Likely To: Reopen a ticket without any kind of a comment.
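The contrast with a good test suite is stark: a failed assertion pinpoints what broke and how, while "see requirements" tells you nothing. A small sketch (the slug feature and its expected behavior are hypothetical):

```python
# A failing assertion explains itself; "does not match requirements" does not.
def make_slug(title):
    # Hypothetical feature: lowercase the title and hyphenate the spaces.
    return title.lower().replace(" ", "-")

def test_make_slug():
    got = make_slug("Hello World")
    # On failure, this message names the exact input, expectation, and result.
    assert got == "hello-world", f"expected 'hello-world', got {got!r}"

test_make_slug()  # passes here; a robot's reopened ticket carries less than this
```

If your automated checks already produce messages like that, a human reviewer's value is precisely the things a build task can't say, which is the robot's whole problem.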
The Essayist

If you have an essayist in your midst, there's a good chance they also wrote the original requirements. If you're already making someone go back and fix something, it pays to be to the point, but essayists seem unable to pare their comments down to what's actually broken and how it should behave. And good luck trying to get them to explain the problem over IM -- anything written is an opportunity for verbosity. Essayists are often a symptom that the features being requested are unnecessarily convoluted.
Tickets Look Like: "The original requirements for the feature as documented require that in the context of page type A that all items to be enumerated be shown as-is with a limit set by the business requirement type of the object being viewed. Items being viewed without a specific business requirement type default to a three-step process to determine the fitness of their viewability. If it is deemed fitly viewable, show all items, if not, then if the item has at least three enumerable components, show only the top three. Otherwise, if only two, show one. If one, show none. For page type B, enumeration acts a little differently..."
Most Likely To: Reopen a ticket five minutes before a release is to go live with a comment that takes you seven minutes to decode.
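Part of the problem is that prose is the wrong medium for logic like this. The page-type-A rule for items without a business requirement type, for instance, condenses to a few lines (a sketch; the function name and shape are my own):

```python
def items_to_show(items, viewable):
    # The essayist's page-type-A rule, condensed (for items with no
    # business requirement type; typed items get their own limit).
    if viewable:
        return items       # deemed viewable: show everything
    if len(items) >= 3:
        return items[:3]   # at least three components: show the top three
    if len(items) == 2:
        return items[:1]   # only two: show one
    return []              # one (or none): show nothing
```

Seven lines where the ticket took seven minutes to decode. A comment that states the rule this plainly leaves far less to misread five minutes before a release.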
The Confused

It's generally not a confused person's fault that they reopened your ticket incorrectly, or reported a "bug" that actually matched the previous specifications. Sometimes they're interns; sometimes they're people requisitioned into QA during a little downtime, or as extra support behind a big release. One way or the other, they can drag down your metrics (if those are being tracked) or force you to do work no one asked for in the first place. These mistakes are usually a sign of missing institutional knowledge, whether from inexperience, from necessary information being unavailable within the workplace, or from an inability or refusal to learn the product in its present state. Most of the problems they create could be avoided by getting out of their seat and talking to someone who would know before opening or reopening a ticket.
Tickets Look Like: "Is this supposed to look like this? I'm not entirely sure. Thanks!"
Most Likely To: Be viewing the page in IE6 anyway.
The Sidestepper

Sidesteppers are sick of working with your issue-tracking solution. They have work for you to do, but they don't feel like writing it down. If they had their way, product documentation may as well not exist. It's hard to push back against a sidestepper, because they're right that direct communication matters. In truth, you need both effective, concise ticketing and clear, helpful conversation. But if the developer has to transcribe the minutes of your hallway meeting into the ticket just to record why it was opened or reopened, you're not doing your end of the work.
Tickets Look Like: "The requirements have changed. Please see me." "This is a pretty complex feature. Let me know when we can talk about this in person and I'll explain everything then."
Most Likely To: Have opened two of the same ticket within days of each other, not realizing what the first one was for.
The Stakeholder

Sometimes it's great when a stakeholder takes an active interest in the development of a product. Other times, a stakeholder whose involvement is otherwise limited will drop a request into the middle of an active ticket, creating a "stop the presses" situation that does little more than stress everybody out.
Tickets Look Like: "This dosent conform to [these brand new] complaince reqs [I just got off the phone about] -- plz fix"
Most Likely To: Keep you late for something that ends up getting scratched in a week.
The Interloper

Interlopers are often developers just like you; you may in fact be one, and it's not always a bad thing. Interloping is generally a symptom of an excessively democratic work environment, and it gets worse where there are no established, enforced policies on style and implementation. Interlopers are often correct that something isn't working or isn't as efficient as it could be, and they may genuinely be trying to improve the code base. But in the absence of a clear coding policy, there's a chance they'll go overboard and perform gruesome cosmetic surgery on already functioning components, leaving them unrecognizable.
What's the difference between an interloper and a proactive developer? One is improving the code base, and the other is changing it for their own benefit. For instance, you may be able to rewrite some core logic into a cryptic one-liner, but have you considered the trade-offs? It's useful to be able to read code without needing the comments to explain something. There's always a middle ground where you can make code more efficient while keeping it legible. When in doubt and in absence of a policy, always conform to the de facto style standards already visible in the file you're working on. There's a good chance those unnecessary refactoring efforts are really just a way for the interloper to entrench themselves into the code base. The more people turn to them to explain their work, the more it inflates their traction in the dev team. This means quick job security without actually earning it. You can't fire someone for bad code if the code works -- especially if they're the only person familiar enough to work with it. Instead, you're stuck with a productivity bottleneck as everyone turns to the interloper for help working with components they have touched.
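To make the trade-off concrete, here's a contrived example (the data and function are my own): both versions pick the top three positive scores, but only one of them states its intent on the page.

```python
scores = {"ana": 7, "bob": -2, "cy": 4, "dee": 9}  # hypothetical data

# The interloper's one-liner: correct, dense, and hostile to the next reader.
top3 = sorted((v for v in scores.values() if v > 0), reverse=True)[:3]

# The middle ground: same behavior, but legible without its author on call.
def top_positive_scores(scores, limit=3):
    positive = [value for value in scores.values() if value > 0]
    positive.sort(reverse=True)
    return positive[:limit]

print(top3 == top_positive_scores(scores))  # True
```

Neither version is faster in any way that matters here; the difference is that the second one doesn't create a bottleneck around the one person who can explain it.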
Interlopers are extremely proud of their contributions and want all the "great" things they've done well-documented. Given that trait, an interloper can also be guilty of doing your work for you and (hopefully inadvertently) taking the credit. That sidesteps any real QA process and prevents other developers from learning from their mistakes -- if there were any; chances are the code was fine to begin with, before the intervention.
Tickets Look Like: "This doesn't work now that I've compressed these three separate methods on this object into one. Please fix." "Went ahead and fixed this. If you feed the method the string 'foo' it will act in a specified way... otherwise, it acts normal."
Most Likely To: Have caused the bug in the first place.
The Three C's
So what makes for good QA? The answer is really very simple: just follow the Three C's of Quality Assurance (yes, I did just make them up). When opening or reopening a ticket, make sure your comments are clear, comprehensive, and constructive:
Clear

A clear ticket comment identifies the problem or requirement and explains concisely what should change to satisfy it. A requirements document describes how a feature should behave; a ticket should read like a diff, showing how to get from the feature's current state to its intended state. That's why merely pointing at a set of requirements is a haphazard way to implement a feature or explain a bug.
Comprehensive

A comprehensive ticket comment is one whose author has taken the time to isolate the variables at play from the perspective of an end user, not the code. If you're writing a comprehensive comment, you've considered every way the feature can be used and at least tried an example interaction for each. This helps everyone involved in the life cycle of a product get things right the first time.
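One practical way to be comprehensive is to write the cases down before you comment: enumerate every user-facing input you can think of, boundaries included, and check each one. A table-driven sketch (the shipping feature and its threshold are hypothetical):

```python
def shipping_cost(subtotal):
    # Hypothetical feature: free shipping at $50 and up, otherwise $5.
    return 0 if subtotal >= 50 else 5

# Enumerate the user-facing cases, including the boundary itself:
cases = [
    (0, 5),       # near-empty cart
    (49.99, 5),   # just under the threshold
    (50, 0),      # exactly at the threshold
    (120, 0),     # well over
]
for subtotal, expected in cases:
    assert shipping_cost(subtotal) == expected, (subtotal, expected)
print("all cases covered")
```

A ticket comment built from a table like this tells the developer exactly which case failed and which cases already work, which is the comprehensive half of a constructive comment too.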
Constructive

A constructive comment is one whose author watches their language: even when it must point out that something wasn't done right the first time, it isn't worded punitively. We're all adults, and we don't need a ticket padded with self-esteem-boosting compliments before getting down to brass tacks. Generally, if a ticket is clear and comprehensive, there's very little room left to be excessively critical. If the plain truth feels a little harsh, you can always mention the scenarios where the feature does work (again, being comprehensive) so it's not all bad news.
If you keep these maxims in mind when evaluating someone else's work, I think you'll be pleasantly surprised at how much more responsive the people you're ticketing become. I've been on both sides, and I've watched myself influence and be influenced by good commenting practices and general availability for follow-up questions. In the end, remember that QA is a form of peer review, and it deserves the same rigorous, structured consideration. You'd want the same if you were on the other end.