PEBCAK and Other Challenges: The Intricacies of QA in Software Testing
- Valerie Zabashta
- Aug 17, 2023
- 5 min read

In the world of software development, Quality Assurance (QA) is often perceived as the gatekeeper of bug reporting, where a diligent QA engineer stumbles upon a bug, swiftly reports it to the developers, and the issue is promptly resolved. However, this conventional perception doesn't capture the true essence of what QAing entails. In reality, a significant part of a QA engineer's role is not merely about reporting bugs, but rather about the meticulous process of tracking them down and analysing the underlying intricacies.
The nature of software is complex, and many anomalies may arise that don't necessarily constitute true bugs. Distinguishing between genuine bugs and expected behaviour requires a keen eye for detail. Sometimes, what seems like a bug might be a result of a specific configuration, edge case, or even user error. PEBCAK (Problem Exists Between Chair And Keyboard) is a very real thing – do not doubt that. A skilled QA engineer has the expertise to discern between these scenarios, saving valuable development time and resources that would otherwise be wasted on non-issues.
Let's uncover the challenges and intricacies that define the art of comprehensive QA issue tracking.
Bugs that should not be reported (those do exist!)
Single, isolated occurrences: Bugs that cannot be reproduced consistently or are difficult to replicate. Reproducibility is crucial for developers to diagnose and fix issues effectively. It's like dealing with mystical beings who leave no trace of their existence, keeping us on our toes and adding that extra sprinkle of magic to the world of quality assurance. Truly, every developer's dream come true!
I don't rush to summon the bug-reporting cavalry unless I can reproduce those pesky anomalies myself with exact steps and have gathered logs from Android Studio, console errors from the Dev Tools, screenshots or screen recordings for the developers. My dedicated pursuit of reproducibility often takes me on an intricate journey, meticulously retracing my digital footsteps to recreate the hiccup not just once, but multiple times. Sure, it might sometimes feel like I'm embarking on a wild bug-chasing expedition, but that's the price I pay for ensuring top-notch quality.
Issues with unsupported environments: If the bug occurs in an unsupported configuration or environment, it is not necessary to report it. Using outdated and ancient devices is like trying to send a fax in a world of holograms – it's a Jurassic approach that belongs in a museum, not a testing lab!
Deviation from Design Documentation: "Bugs" reported against behaviour that actually follows the specifications outlined in the design documentation. Design suggestions that help improve the functionality and UX – yes; personal preferences that lead to design changes – no. If a design suggestion involves turning all buttons in the app into miniaturised cats that meow whenever clicked, there is a decent chance the design team will kindly explain that, adorable as it may be, it is not a very good idea for an app about dogs.
Shortcuts/Cheats Leading to Undesirable Behaviour: Bugs that arise only because shortcuts or cheats were used, producing behaviour that would not occur in the product without them. This clever manoeuvring might seem like a time-saving hack, but it can lead to serious consequences – missed bugs, overlooked issues and, ultimately, an inaccurate assessment of the game's performance. Moreover, shortcuts that circumvent certain game mechanics may create an incomplete testing environment, providing only a partial view of the game's behaviour. This can lead to gaps in test coverage and hinder the QA’s ability to identify and address potential issues that players may encounter.
Gods Know What Happened, and It Happened Only Once: Glitches or anomalies that occurred only once, seemingly defying explanation or reason. It's like the software decided to throw a spontaneous digital party and then went back to its regular, well-behaved self, leaving you in awe and wonder. Each time I encounter a glitch that makes me do a quadruple-take, I remember that in the magical realm of software, anything is possible.
Unravelling the intricate testing process behind a tiny feature
Behind every seemingly small feature lies a world of complexity that only a select few get to witness – the QA engineers and their behind-the-scenes magic. Users may encounter a new button or a subtle design change, thinking it's a minor addition to the software. However, beneath this unassuming facade, a hidden labyrinth of testing complexities awaits.
Test Case Example: App should launch on the Home Screen.
At first glance, the test case "App should launch on the Home Screen" may seem simple and straightforward, like a walk in the park. After all, we open apps every day without a second thought. The QA engineer's quest begins by considering all possible scenarios that could impact the app's launch. A few questions that should be answered before confirming the feature is implemented:
Does the app launch successfully with an active internet connection? With none? If the app requires an internet connection for specific Home Screen features, does it provide an appropriate offline experience?
How does the app behave when the internet connection is lost during launch?
Does the app launch correctly on different operating systems?
Does it function as expected when installed for the first time and when revisited subsequently?
What happens if the user interacts with the app (e.g., tapping buttons or scrolling) immediately after the launch?
What happens if the device orientation changes while the app is launching?
Does the app resume to the Home Screen when relaunched after being minimised or running in the background?
Does the app meet acceptable performance criteria during launch, even on low-end devices?
Are all UI elements on the Home Screen displayed correctly and consistently across different devices and orientations?
Is the layout optimised for various screen sizes?
Each question posed unlocks a Pandora's box of potential scenarios.
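To make this a little more concrete, here is a minimal sketch of how a couple of these launch checks could be automated as an Android instrumentation test. The MainActivity class and the home_screen view ID are assumptions standing in for whatever the real app actually uses, and a framework other than Espresso would work just as well.

```kotlin
// A minimal sketch of automated launch checks, assuming a hypothetical
// MainActivity and a Home Screen root view with the ID R.id.home_screen.
import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LaunchTest {

    // Cold launch: the Home Screen should be visible straight after start-up.
    @Test
    fun appLaunchesOnHomeScreen() {
        ActivityScenario.launch(MainActivity::class.java).use {
            onView(withId(R.id.home_screen)).check(matches(isDisplayed()))
        }
    }

    // Recreation (e.g. an orientation change while launching): the Home Screen
    // should still be there once the activity has been rebuilt.
    @Test
    fun homeScreenSurvivesRecreation() {
        ActivityScenario.launch(MainActivity::class.java).use { scenario ->
            scenario.recreate()
            onView(withId(R.id.home_screen)).check(matches(isDisplayed()))
        }
    }
}
```

Even then, automated checks like these cover only a fraction of the questions above – offline behaviour, first-install flows and performance on low-end devices still call for manual and exploratory passes.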
QA testing pitfalls: avoiding mistakes
Lack of Regression Testing: Not performing regression testing after an issue is reported as fixed. This can lead to the recurrence of previously resolved issues due to unintended side effects or incomplete fixes.
Regression testing? Nah, it's overrated! Let's just assume that once a bug is fixed, it's gone forever, like an ancient artefact locked away in a museum. After all, what could possibly go wrong with software that's been touched by the divine hands of developers? But on a serious note, skipping regression testing can lead to a graveyard of unresolved issues, and we definitely don't want those ghostly bugs sneaking up on unsuspecting users.
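One lightweight way to stop fixed bugs from rising from the dead is to pin every fix with a small automated regression test that runs on every build. Here is a rough sketch under made-up assumptions – a hypothetical ScoreCalculator whose rounding bug was fixed under an equally hypothetical ticket APP-123:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test
import kotlin.math.roundToInt

// Hypothetical class standing in for whatever unit was actually patched.
class ScoreCalculator {
    fun roundedScore(raw: Double): Int = raw.roundToInt()
}

class ScoreCalculatorRegressionTest {

    // Regression guard for the made-up ticket APP-123: scores used to be
    // truncated instead of rounded, so 9.5 came back as 9. The test stays
    // in the suite so the bug cannot quietly return with a later change.
    @Test
    fun scoreIsRoundedNotTruncated() {
        assertEquals(10, ScoreCalculator().roundedScore(9.5))
    }
}
```

The exact framework matters far less than the habit: every confirmed fix leaves behind a test that will scream if the ghost ever comes back.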
Incomplete Verification: Relying solely on the developer's word without thoroughly verifying the fix on multiple devices. It is like accepting a weather forecast without looking out the window – not because we don't trust the developers, but because sometimes even the best meteorologists can't predict a sudden change in the weather.
Emulator Reliance: Only testing on emulators and not testing on real devices. Emulators might not fully replicate real device behaviour, leading to overlooking specific device-related issues.
Quality Assurance in the Kids' Entertainment Industry
In the enchanting world of kids' entertainment, software often finds itself in the tiny hands of its youngest users, who possess a magical flair for treating it quite differently than their adult counterparts.
Unlike adults, who tend to follow conventional workflows, kids venture into boundless realms of experimentation, tirelessly trying out every feature in imaginative ways. They might tap, swipe, and pinch not only to press a button but also to see if the app responds in unexpected, yet fascinating, ways. This playful curiosity often leads them to uncover unforeseen bugs that might have remained hidden in the world of adult-centric testing.
Additionally, kids might exhibit unique patterns of interaction, such as shorter attention spans, rapid tapping, and the occasional random button mashing, which can put apps through a whirlwind of real-world scenarios. These endearing idiosyncrasies of their usage highlight the necessity for thorough testing that caters to the enchanting minds of young explorers, ensuring a seamless and captivating experience for our little digital adventurers.
Acknowledging that users might not follow the expected or predictable user flow and designing tests to simulate different interactions to uncover potential bugs and usability concerns is crucial. I joyfully welcome the challenge of simulating a plethora of diverse user actions, just like seasoned explorers venturing into uncharted territories. Who needs predictability when you can have an exhilarating quest for potential bugs and usability quirks?