From KISS to Occam's Razor, we've all heard sage advice in some form or another about keeping things simple. It could be about the user experience of the software you are working with, it could be about your test approach. Sometimes though, there's more to simple than meets the eye.
Sometimes it means...
Don't disrupt the Status Quo
Testers have an innate drive to imbue quality into everything we touch. This can lead to some boat rocking. Most software has some darker corners that could use a little care and attention. And testers don't limit themselves to testing software; we find bugs in processes, documentation, even in how people think about software.
So don't be surprised when, after sharing your testerly thoughts, you encounter a response of "we want to keep it simple," or some equivalent.
We aren't interested in problems found there
Maybe you want to do some performance testing because, based on some of your tests, you suspect a bottleneck is lurking somewhere. The response you get is, "Let's keep things simple for now; development is keeping performance in mind as they develop. We'll deal with issues if they arise."
There are cases when that might be a reasonable response. It can also mean, "Let's not go making problems where none currently exist..." In other words, we'd rather be reactive, scrambling to duct-tape a solution together when a problem hits.
We don't know any other way
Over time teams can become echo chambers that serve to reinforce existing practices and drown out any dissenting opinions.
Keeping it simple can just be a euphemism for "we aren't open to change." Good, bad, or indifferent, it doesn't matter: there's the right way, the wrong way, and the way We do things. There's usually some history or reasoning that has prompted a particular practice or ritual.
In the worst cases, the original cause for the practice may have dissipated, or been rendered irrelevant for some other reason, but there's no one left on the team who actually knows why things are the way they are. They just follow the ritual to avoid angering the software gods.
We don't understand the Risk
Portions of your software may be outside of the core strengths of your team. For testers it could be a lack of specific types of testing or domain knowledge, like security or performance testing skills. The same goes for developers: your team could be infected with NIHS (Not Invented Here Syndrome) and not even know it.
It may seem simpler to stick with what you know; the rest seems pretty complicated, or maybe even unnecessary. The problem lies with (as Donald Rumsfeld said) the "unknown unknowns."
You can't test for what you don't really understand. You can end up just guessing and assuming.
We don't understand the Cost
If someone can't immediately quantify the cost of a problem, it becomes difficult to get them behind an idea to avoid it. Keeping it simple here may border on barely functional.
Due to time or a variety of other constraints, you may be releasing brittle or user-unfriendly software. You may skip a regression cycle because the work shouldn't have affected any other areas. The pressure to release software is immediately tangible; the effect it has on your users is a delayed force.
What can we do about it?
As testers, we can't just take things at face value. It's our job to translate the message, so we can truly evaluate the situation and tactically plan subsequent actions. Depending on the context, some or all of these translations are reasonable and valid. Sometimes there is a method to the madness, and it's our job to understand that before suggesting alternatives. In other cases we need to dig a little deeper, and through that understanding seek out a means of communicating effectively with our team, to drive and enable higher-quality software, processes, and teams.