A Software Engineer in Testing: How I Build Quality In

If you're familiar with my journey, you'll know I made the transition to a software engineer role around 6 months ago, after being a QA engineer in various forms for the best part of a decade. Part of my motivation for the pivot was that I genuinely believed I could build quality software much more effectively "from the inside" instead of trying to influence teams as a perceived outsider who isn't directly building the system. My theory was that I'd mostly be doing the same job as when I was a QA Engineer, but there'd be much less resistance to the way I work and my proposed changes.

My team focuses on building the systems that provide the data to power our main customer offering. We're part of a much bigger whole, our domain represented by a large diagram full of boxes with many squiggly arrows coming in and out. Many features we develop don't directly touch a traditional "frontend", meaning testing isn't always straightforward.

Our one dedicated tester was shared with another team, and now they're off on a full-time mission to learn end-to-end test automation, so we've had to take ownership of QA activities as part of our own workflows. As a QA engineer I picked up some good habits around quality and loved to help teams adopt these wherever possible. So, our team losing a dedicated tester didn't feel like an insurmountable challenge.

Sometimes, the way developers work can be rather opaque, and this can lead to a duplication of effort as far as testing is concerned. One issue I commonly see is testers not being aware of what has been covered at the unit/integration level, and what impact that has on their test activities. With that in mind, I wanted to share how I work to add to that conversation. Here are the ways I test at each step of the software development lifecycle.

Refinement

We don't do scheduled planning or refinement meetings on our team, so I try to inject the techniques used in those meetings in an ad-hoc way wherever I can. Whether that's clarifying details on tickets or hopping onto a call with a colleague before I start the work, I try to gather as much information as possible up front, before a single line of code is written. My usual goal is to understand WHY the ticket is being asked for, and what goal it is trying to achieve.

e.g. "Create a scheduled job to delete all records in the {blah} table older than 10 days"

  • Why do we want to delete all of these records? Where did this request come from? Is it from customer feedback, from the business, from the team?
  • Do we delete all of the related data in other tables as well?
  • Are there similar scheduled jobs which might interfere? How do they achieve their goals? Is it something we can reuse?

Design

Once I understand the context of the work I've been asked to do, I'll create a list of the repositories/projects I'll be working in, and the changes I will make in each, building up from the requirements on the task. A list of questions will come out of this activity, which I'll seek to get answers to before I write any code. In my past life as a QA, I'd get this type of communication from a "Three Amigos" style meeting. Once the questions have been answered, I end up with a checklist of all the changes I need to make across the estate to achieve the goals of the ticket. This is my personal way of working, and it helps me keep track of everything I need to do.

An example of this looks like...

  • Repo 1: Implement scheduled job to request delete. Define validation at this level for missing schedule variable. Define how error codes from the delete request will be handled.
  • Repo 2: Add cron schedule as environment variable on different environments. Do different environments get shut down outside of working hours? Do they have less/more resources than others, which might lead to timeouts? Will this affect the schedule I choose for each environment?
  • Repo 3: Implement endpoint which performs batch delete. Define error handling on endpoint. Define validation for date input. Does "older than 10 days" include or exclude the 10th day?
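
That last boundary question is worth settling in code before anything ships. Here's a minimal sketch of how I'd pin it down, assuming we decide "older than 10 days" excludes the boundary day itself (the function names and retention constant here are illustrative, not our actual service code):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 10  # illustrative; the real value would come from config


def delete_cutoff(now: datetime) -> datetime:
    """Records with a timestamp strictly before this cutoff get deleted."""
    return now - timedelta(days=RETENTION_DAYS)


def is_deletable(record_ts: datetime, now: datetime) -> bool:
    # Strict '<' comparison: a record exactly 10 days old survives this run,
    # i.e. "older than 10 days" excludes the 10th day.
    return record_ts < delete_cutoff(now)


now = datetime(2024, 6, 11, tzinfo=timezone.utc)
exactly_ten = now - timedelta(days=10)
older = now - timedelta(days=10, seconds=1)
print(is_deletable(exactly_ten, now))  # False: boundary record is kept
print(is_deletable(older, now))        # True
```

Writing the comparison down like this turns a vague requirement into a decision the whole team can see and challenge.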

Devising Test Scenarios

Since I have a written checklist of the changes being made to satisfy the goals of the task, this translates pretty easily to test scenarios. As I know how I'll be implementing the changes, I can map the test scenarios to their different levels easily. It looks something like this:

Unit test: Assert error thrown when invalid date passed in. Test dates on boundaries to avoid the dreaded off-by-one errors. Chuck all of your validation conditions at the wall here, it's cheap and fast.
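
As a rough illustration of what I mean by chucking validation conditions at the wall, here's a hedged sketch (the `parse_retention_date` validator and its rules are made up for demonstration, not our real code):

```python
import unittest
from datetime import date


def parse_retention_date(raw: str) -> date:
    """Illustrative validator: accept only ISO YYYY-MM-DD dates."""
    try:
        return date.fromisoformat(raw)
    except (TypeError, ValueError):
        raise ValueError(f"invalid date: {raw!r}")


class ParseRetentionDateTest(unittest.TestCase):
    def test_valid_iso_date(self):
        self.assertEqual(parse_retention_date("2024-02-29"), date(2024, 2, 29))

    def test_boundary_invalid_day(self):
        # Classic off-by-one territory: Feb 29 exists in 2024 but not 2023.
        with self.assertRaises(ValueError):
            parse_retention_date("2023-02-29")

    def test_garbage_input(self):
        with self.assertRaises(ValueError):
            parse_retention_date("ten days ago")


if __name__ == "__main__":
    unittest.main()
```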

Integration test: Correct data deleted from database when the endpoint is called, using a spread of data on the logical boundaries to visually demonstrate behaviour.
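
A sketch of what that check might look like against an in-memory SQLite database (the `records` table and `batch_delete_older_than` helper are illustrative stand-ins for the real endpoint):

```python
import sqlite3
from datetime import datetime, timedelta, timezone


def batch_delete_older_than(conn, cutoff: datetime) -> int:
    """Delete rows strictly older than the cutoff; return rows removed."""
    cur = conn.execute(
        "DELETE FROM records WHERE created_at < ?", (cutoff.isoformat(),)
    )
    conn.commit()
    return cur.rowcount


# Seed rows on both sides of the logical boundary.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, created_at TEXT)")
now = datetime(2024, 6, 11, tzinfo=timezone.utc)
cutoff = now - timedelta(days=10)
rows = [
    (1, (cutoff - timedelta(days=5)).isoformat()),  # well past: deleted
    (2, cutoff.isoformat()),                        # exactly on boundary: kept
    (3, now.isoformat()),                           # fresh: kept
]
conn.executemany("INSERT INTO records VALUES (?, ?)", rows)

deleted = batch_delete_older_than(conn, cutoff)
remaining = [r[0] for r in conn.execute("SELECT id FROM records ORDER BY id")]
print(deleted, remaining)  # 1 [2, 3]
```

Seeding data on either side of the boundary makes the behaviour visible at a glance when the test runs.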

End-to-end test: Observe that the schedule runs on time and the endpoint is called, deleting the data. Any config mismatches or environment issues become apparent here.

Exploratory testing

Exploratory testing as a developer is so much fun. Pop a breakpoint in, spin up your project in debug mode and exercise your code with different values, stepping through to watch the flow of data through the different paths. It's a fun game as I let my imagination run wild and defensively program against my own craziness.

Testing is the process of deeply learning how my code works as I write it and it's the most fun part of the job.

The neuroticism I had as a QA hasn't gone away either, so I'll always triple check against my test scenarios before I put the work up for code review.

Communication & Updates

As a QA Engineer, you're constantly having to prove the value of your role, and so providing clear and timely updates of my work and its impact was a vital part of the job. As a software engineer, when I give updates at standup now, I try to add value by describing any problems I've run into, as well as how I've tackled them. Sometimes, I'll leave small info dumps on our team Slack channel walking through all the silly things I have tried, inviting people to discuss. They follow a similar format to a bug report, detailing the issue and the reproduction steps, as well as any additional thoughts and theories. Even if there are no immediate replies, there's a searchable record for anyone having the same kind of issue in the future.

Monitoring

We use an observability tool called New Relic across our estate, which provides useful metrics and logging. This hooks into the application code so I can define custom errors and attach useful attributes to these errors, such as providing IDs from failed transactions, which can come in handy later on for debugging purposes.
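
The shape of those custom errors matters more than the tool. As a rough sketch (the exception class and attribute names here are mine for illustration, not our production code, and the commented-out agent call is my understanding of the New Relic Python agent's API, which should be checked against its docs):

```python
class BatchDeleteError(Exception):
    """Carries the context we want to see alongside the error in monitoring."""

    def __init__(self, message: str, *, job_name: str, failed_ids: list):
        super().__init__(message)
        self.job_name = job_name
        self.failed_ids = failed_ids

    def attributes(self) -> dict:
        # These become custom attributes on the reported error, so a failed
        # transaction can be traced back to the exact records involved.
        return {
            "job.name": self.job_name,
            "failed.ids": ",".join(map(str, self.failed_ids)),
        }


try:
    raise BatchDeleteError(
        "delete request rejected",
        job_name="purge-old-records",
        failed_ids=[101, 102],
    )
except BatchDeleteError as err:
    attrs = err.attributes()
    # With the New Relic Python agent, this is roughly where I'd report it,
    # e.g. newrelic.agent.notice_error(attributes=attrs) -- verify in the docs.
    print(attrs)
```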

Once a change has gone out, I can monitor New Relic for instances of those custom errors I defined, as well as see any potential stability or performance impacts of the change. Indeed, when the work referenced in this post was put into production, New Relic flagged several instances of the custom error I added, which allowed us to quickly investigate the issue. Turns out, even after all that pre-work, there was another scheduled job running which I wasn't aware of!

Sometimes changes can have unintended effects way downstream, so I will keep an eye on our Slack channels for people raising new issues and give them a read over in case they're related.

Conclusion

That theory I mentioned earlier about me doing mostly the same job as I did when I was a QA? It turned out to be true. The core fundamentals of my workflow - refinement, design, testing, communicating and monitoring - remain unchanged. My journey has shown me that quality engineering and software engineering are deeply intertwined, perhaps more so than we realise. This overlap raises an interesting question: Are we doing ourselves a disservice by treating them as distinct disciplines?

Whether you identify as a software engineer, a quality engineer, or proudly somewhere in between like me, I invite you to think about your own processes. How can you embed quality practices into your workflows in a way that fits seamlessly with your team's processes? How can you collaborate more effectively across roles to build better software together?