How Small Edge Cases in Fairness Testing Can Destroy Player Trust
- Valerie Zabashta
- Feb 7
- 3 min read

Fairness in games that rely on numbers, probabilities, and random events is one of those things players don’t notice – until something feels off. And when it does, trust is gone instantly. It doesn’t even have to be a real issue. Just a feeling that something is rigged, and suddenly the game is "broken" in the player’s mind.
As a QA, I’ve seen how tiny, seemingly irrelevant bugs can set off this chain reaction. The kind of stuff that would normally be “meh, low-priority” but here? It’s the difference between a game people love and a game they never go back to. Here are some of my favourites.
1. The UI lag that makes people think the game is cheating
Imagine a game where you roll dice, open a loot box, or spin some kind of wheel. The result is calculated instantly, but there’s a delay before the animation shows the outcome. Even if everything is working perfectly under the hood, that tiny lag makes players suspicious.
“I saw the result change! The game is deciding in real-time based on what benefits it, not me.”
Lag and animation stuttering are common issues, but I think they’re more forgivable when they occur on lower-end devices. Players using older phones or laptops might experience these problems, and while it’s not ideal, I would accept it as part of the trade-off for using older hardware. However, when lag or stutter occurs on higher-end devices, that’s when it starts to feel less excusable and more like a breakdown in the overall experience.
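As a minimal sketch of how this can be checked, here is a hypothetical helper that compares the backend outcome against what the UI actually displays, and flags reveals that arrive too late. `check_reveal` and the `MAX_REVEAL_DELAY` budget are my own illustrative assumptions, not part of any real engine or test suite:

```python
# Hypothetical sketch: verify the displayed outcome matches the backend
# result, and that the reveal delay stays within a trust-friendly budget.
MAX_REVEAL_DELAY = 0.5  # seconds; an assumed UX budget, tune per game

def check_reveal(backend_result, displayed_result, computed_at, displayed_at):
    """Return a list of fairness-relevant findings for one roll/spin."""
    findings = []
    if displayed_result != backend_result:
        findings.append("MISMATCH: UI shows a different outcome than the engine")
    delay = displayed_at - computed_at
    if delay > MAX_REVEAL_DELAY:
        findings.append(f"LAG: outcome revealed {delay:.2f}s after it was decided")
    return findings

# Example: engine decided '7' instantly, UI only showed it 0.8s later
print(check_reveal(7, 7, computed_at=0.0, displayed_at=0.8))
```

The exact delay budget matters less than having one at all: once the check runs on every round of a huge sample, even rare desyncs between engine and animation surface as concrete findings instead of vague player complaints.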
2. The “Too Many Coincidences” problem
Here’s a fun one: a player keeps getting the same near-win pattern over and over. Statistically possible? Yes. Feels believable? Not at all.
Players assume randomness works like a shuffled deck – distributing results “fairly” over time. They don’t expect streaks, even though true randomness causes streaks. So if the game unintentionally leans towards showing “almost winning” scenarios too often, it doesn’t matter if the math checks out. It looks rigged.
As QA, we can’t control the probabilities themselves, but we can check whether the game presents outcomes in a way that doesn’t feel like it’s trying to manipulate players.
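To see why streaks are normal rather than suspicious, here is a small illustrative simulation (the `longest_streak` helper is mine, not from any real test suite): in 1,000 fair coin flips, runs of eight or more identical results are routine, even though most players would call that "rigged".

```python
import random

# True randomness produces longer streaks than intuition expects.
def longest_streak(flips):
    """Length of the longest run of identical consecutive results."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)  # fixed seed so the sketch is reproducible
flips = [random.choice("HT") for _ in range(1000)]
print(longest_streak(flips))
```

The same idea applies to "near-win" displays: before filing a bug about a suspicious pattern, it helps to know what an honest random source actually looks like over a large sample.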
3. Rounding errors in rewards
This one is so small it’s almost funny – except when players get angry. A reward is technically correct but gets displayed 0.01 lower due to rounding. That’s it. A fraction of a number that doesn’t affect the gameplay at all, but now people are convinced they’re being short-changed.
“I was supposed to get 100 points, but I only got 99. Where’s my missing point?”
This is why you test rewards in every currency, every game mode, and every possible edge case where numbers might get rounded. Because one decimal in the wrong place is all it takes for players to think they’ve uncovered a conspiracy.
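A classic source of that missing decimal is binary floating point. The amounts below are purely illustrative, but the mechanism is real: 2.675 can’t be stored exactly as a float, so naive rounding displays 2.67 where a player expects 2.68.

```python
from decimal import Decimal, ROUND_HALF_UP

# 2.675 is actually stored as roughly 2.67499999999999982...,
# so naive float rounding shaves a cent off the displayed reward.
raw_reward = 2.675
print(round(raw_reward, 2))  # shows 2.67, not the 2.68 a player expects

# Doing the arithmetic in Decimal (built from a string, not a float)
# avoids the binary representation surprise entirely.
exact = Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(exact)  # 2.68
```

This is also why checking the display layer matters as much as the maths: the engine may hold the correct value while the UI formats it one cent short.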
How do we test fairness when we can’t control the randomness?
We’re not testing the engine itself – that could be an old and well-tested backend system set up ages ago. But we are testing everything that sits on top of it. So what do we check?
✔ Huge sample sets – run thousands of rounds and look for patterns that feel unnatural.
✔ Edge cases with different settings, numbers, and limits.
✔ UI sync – does the game actually show what’s happening under the hood, or is there a disconnect?
✔ Do the probabilities look reasonable to a human, or does the game unintentionally create too many suspicious-looking streaks?
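The "huge sample sets" check above can be sketched like this: roll a hypothetical six-sided outcome many times and flag any face whose observed frequency drifts far from the expected 1/6. The `frequency_report` helper and the tolerance value are illustrative assumptions, not statistical gospel – a proper pass would use a chi-square test:

```python
import random
from collections import Counter

def frequency_report(samples, faces=6, tolerance=0.02):
    """Map each face to True if its observed frequency is within
    `tolerance` of the expected uniform probability."""
    counts = Counter(samples)
    n = len(samples)
    expected = 1 / faces
    return {
        face: abs(counts[face] / n - expected) <= tolerance
        for face in range(1, faces + 1)
    }

random.seed(7)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(frequency_report(rolls))  # True everywhere means nothing looks skewed
```

Even a crude report like this catches the gross failures – a configuration pointing at the wrong outcome table, a biased mapping from raw random numbers to faces – long before players start posting screenshots.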
The Bottom Line
Fairness in games isn’t just maths. If something feels off, it is off, no matter how correct the code is. As QA, we have to think beyond functionality. We need to think like players and spot those edge cases that shouldn’t be a problem – but absolutely will be.
Okay, off to test some edge cases now.