testing the production code, not writing the production code. There is a stigma among some devs that testers are not as good as "real" devs, that the ones who ship are the real heroes. That's a load, as I am learning now that I work on the other side of the fence. Without a good test team, lots of obvious bugs can slip through the cracks.
This is one of the reasons I decided to try out SDET. I wanted to get first-hand experience of what an SDET does at a major company and how the test organization works to support the release of a product from a software giant.
It has been eye-opening. The first thing: I write a lot of code. I write a lot of test automation code to exercise the production software. That's cool with me, since I am a programmer at heart and love to code. Finding ways to break the code is a fun challenge and lets me stretch my skills in new ways.
Besides finding bugs in our APIs with my automation code, I also do manual testing of the software. As any techie knows, finding bugs can be pretty easy, especially during the development cycle.
So, what makes a good report that does not get sent back from the development team asking for more information? This is something I struggled with on the development side for years: getting people to write bug reports that gave me enough information to reproduce (repro) or debug their crash.
Here is what I like to see and what does not get sent back from development:
- Build version:
- Of the software you are testing.
- Screenshot of the problem
- If it is a problem in the UI then a screenshot for sure.
- If you are seeing something weird with a command line test, then a screenshot or a copy-paste of the text.
- What state was the software in before you started this test?
- What server were you talking to?
- Did you need to authenticate? What were the login credentials?
- Repro Steps:
- What steps did you follow to trigger this bug?
- If this was an automated test or a test of an API, include the code if it is short and demonstrates the bug, or link the test case here so the dev can quickly look at your code. Maybe you made an assumption in your code that is wrong, or maybe the API needs to be fixed to take that assumption into account.
- Expected Outcome:
- What was your expectation? Maybe your expectations are wrong.
- Actual Outcome:
- OK, it did not work as expected. What did happen instead? Maybe that was the expected outcome.
- Crash Report:
- If you are testing an application, crash reports are common. Know where to find them for your platform and how to attach them to the bug.
- Collect logs from the app from around the time of the bug. Don't see logs in the common logging location? Talk to the devs to find out if they are logging to a special file, or if there are extra debug logs you can turn on. Find out so you can help them.
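To illustrate the "include the code if it is short" point above, here is the shape of a minimal repro I might attach to a bug. Everything in it is hypothetical (the function, the inputs, the product it stands in for); the point is that setup, repro step, expected outcome, and actual outcome all live in one short runnable file the dev can execute immediately:

```python
# Hypothetical minimal repro attached to a bug report. normalize_path()
# is a made-up stand-in for the API under test; a real report would
# reference the product's own function and build version.

def normalize_path(path):
    """Stand-in for the API under test: collapses duplicate slashes."""
    while "//" in path:
        path = path.replace("//", "/")
    return path

if __name__ == "__main__":
    # Repro step: call the API with a UNC-style path.
    result = normalize_path("//server/share//docs")

    # Expected outcome: the leading '//' (UNC prefix) is preserved.
    expected = "//server/share/docs"

    # Actual outcome: the leading '//' gets collapsed too, so the dev
    # sees the mismatch the moment they run this file.
    print("expected:", expected)
    print("actual:  ", result)
```

A file like this answers the Repro Steps, Expected Outcome, and Actual Outcome sections all at once, which is why I prefer it to prose when the bug is in an API.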
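On the "extra debug logs" point: if the app under test is Python and the devs tell you where the knobs are, turning up verbosity before a repro run can look something like the sketch below. The env var name, log file, and format are all assumptions for illustration; ask the devs what their app actually honors.

```python
# Hypothetical sketch: crank up log verbosity before reproducing a bug,
# then capture everything to a file you can attach to the report.
import logging
import os

# Made-up switch the app might read at startup -- confirm with the devs.
os.environ["MYAPP_LOG_LEVEL"] = "DEBUG"

# If you can reach the app's Python logging config directly:
logging.basicConfig(level=logging.DEBUG,
                    filename="repro_session.log",
                    format="%(asctime)s %(levelname)s %(message)s")
logging.debug("starting repro: build 18.2.1234, server staging-01")
```

Attaching `repro_session.log` alongside the repro steps saves the dev a round trip asking you to run it again with logging on.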
That's what I look for, what I try to have in my reports as a minimum, and what does not get sent back for more information, whether by me back when I was a dev or by the devs I work with now.
My goal as a tester is to open a bug with enough information that it is actionable by the developers and gives them what they need to address it. I want to see a bug-free product as much as anyone, and if I can shorten the bug data collection loop so that devs spend their time fixing bugs instead of asking for more information, then I did a good job.
Is there anything I missed in here that I should add?