What can we learn from the top breaches of 2020?

I’ve been catching up on my tech newsletters this week and came across this article:

It got me thinking, what can we learn from the top breaches of 2020? What can we communicate to our team from these breaches to make plans for 2021 and beyond?


@skillinen, FYI!

We all need to keep security aspects in mind when we devise test strategies!


I think a lot of this will boil down to whether companies have the budget (and, if they do, whether they’re actually using it) for penetration testers and other cyber security specialists.

It’s a specialised skill-set within testing in its own right. There are some key things that testers and developers can use, such as the OWASP Top Ten, but at the end of the day it’s not a replacement for a full security test.

From what I have experienced, the stance is typically “it won’t happen to us. People want to hack the likes of Facebook, Amazon, etc.” — the big brands where the data will be worth a lot of money.
Yet when I go on haveibeenpwned.com, it’s not only the biggest names on there!


That’s an interesting article, but I’d really like to see some writing that takes into account the vastly different levels of skill and complexity of the largest hacks. Supply chain attacks (such as SolarWinds) are uncommon and difficult, both to perpetrate and to defend against. Ransomware, while very impactful, is also a much less complex thing to handle. Many (most?) threats can be thwarted by the solid application of basic security techniques.

As testers, one of the most powerful things we can do to improve the security of our software is to ally ourselves with our information security and operations teams. QA isn’t a replacement, but is best placed to assist them in knowing what good behavior looks like, and improving detection of bad behavior. (Loosely defined as “anything that can impact the integrity of my system.”)

It’s important to know what your threats really are - as @froberts points out, plenty of companies think they aren’t targets. This is just not true.

There’s a whole conversation to be had about which threats are likely to impact a given company, or a given piece of software. One of the best tools I’ve seen to address that is threat modeling. That’s a big topic, probably too big for a fast thread reply like this one, but I highly recommend testers look into it - it’ll be quite similar to other forms of modeling an application to determine where tests should go!


I’m only guessing here, as I doubt anyone understands the context of each company and how they ended up with those problems…

  • Things grow complex, and complexity is harder to manage.
  • The bigger the company, the more reckless it can be with some things.
  • Things happen, and the money spent on avoiding or dealing with them could have been more than the impact itself.
  • Even when breaches happen, people forget or ignore them and continue to use and buy those products.
  • No software product is safe; if someone wants to attack your product, they will find a way sooner or later. Remember that some attacks take hours, others take years.
  • One department of a company can do a good job, but other departments can mess up many things.
  • Security testing is done mostly with tools and scripted scenarios/steps. Tools are not going to search on their own for deep, complex, product-specific problems or risks.
  • As the number of software products increases, so do the chances of some getting hacked.

Working for one of those companies listed, I can say it has rightly caused a shake-up in how seriously security is taken. Pushing for better secure coding practices and security testing is now an easier discussion, and we are pushing to include OWASP Top 10 checks in our CI pipelines to ensure the basics are covered.

There’s still a long way to go in educating the business on its importance, though.


I’m always blown away by the variety of cyber attacks. The sophistication used in some is properly mind-blowing.
On the other hand, some are so basic that a minor preventative action would have stopped them.

Goes to show no defence is unbreakable, and even small actions make a difference.


This is very true, but with caveats.

I’ve seen so-called security testing/penetration testing experts just execute some testing using Burp Suite (or similar), produce a report based on that output, and then take it no further.

These tools are great at highlighting potential problems, and will even help discover things like injection flaws and other vulnerabilities. However, a lot of these tools, OWASP ZAP included, do report a lot of false positives — that is, an alert in the report that is not actually a bug, but some other aspect of the application. I’ve had both ZAP and Burp mistake a long string of numbers in a URL (it being a GUID) for a potential credit card number exposure, just because the GUID started with a particular sequence of digits.
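To see why that kind of false positive happens: scanners typically flag long digit runs that also pass the Luhn checksum, the standard card-number validity check — and roughly one in ten arbitrary digit strings passes it purely by chance. Here’s a minimal sketch of the check (the specific digit strings below are illustrative, not from any real scan):

```python
def luhn_valid(digits: str) -> bool:
    """Luhn checksum — the usual heuristic scanners use to spot card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4242424242424242"))  # True  (a well-known test card number)
print(luhn_valid("1234567890123452"))  # True  (an arbitrary digit run that passes by chance)
print(luhn_valid("1234567890123456"))  # False
```

So a 16-digit run lifted out of a GUID has a decent chance of looking like a valid card number to a pattern-plus-checksum rule, which is why a human still has to triage these alerts.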

One thing some of these tools and scanners will do is allow a tester (of any type) to take a potential flaw that a scan or a spider has uncovered and either replay the HTTP request, or modify the data in some way to uncover more information about the potential problem. They often have built-in intercepting proxies, so you can use breakpoints to identify which request is potentially at risk of exploitation.
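The core of that “replay with modified data” idea is simple enough to sketch without any tool at all. Here’s a minimal, stdlib-only illustration of rebuilding a captured request URL with one parameter swapped for a test payload — the URL and endpoint are hypothetical, and a real proxy like Burp Repeater does this (and much more) interactively:

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def mutate_param(url: str, param: str, payload: str) -> str:
    """Rebuild a captured URL with one query parameter replaced by a payload —
    roughly what replaying a request with modified data looks like."""
    parts = urlsplit(url)
    query = parse_qs(parts.query)
    query[param] = [payload]
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))

# A captured request URL (hypothetical endpoint):
captured = "https://app.example.test/search?q=shoes&page=1"
print(mutate_param(captured, "q", "' OR '1'='1"))
```

Sending that mutated request and diffing the response against the original is the manual version of what the repeater/intercept features automate.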

You can also tune the tools to explore software at different levels of depth and accuracy. Or apply a fuzzing library over an endpoint, data input, or some other application element.
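At its simplest, fuzzing an input is just substituting a marker in a request template with each entry from a payload list. This is a tiny illustrative sketch — real fuzzing wordlists (such as those bundled with ZAP or Burp) are far larger, and the payloads and endpoint here are just examples:

```python
# A tiny illustrative payload list — real fuzzing wordlists are far larger.
PAYLOADS = [
    "'",                              # SQL metacharacter
    "<script>alert(1)</script>",      # reflected XSS probe
    "../../../../etc/passwd",         # path traversal
    "A" * 5000,                       # oversized input
    "%00",                            # encoded null byte
]

def fuzz_cases(template: str, marker: str = "FUZZ"):
    """Yield one concrete input per payload by substituting the marker."""
    for payload in PAYLOADS:
        yield template.replace(marker, payload)

for case in fuzz_cases("/api/items?id=FUZZ"):
    print(case[:60])
```

The tool then sends each case and watches for anomalous responses (errors, timeouts, reflected payloads) — which is exactly the feedback you have to learn to interpret.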

These tools are of course not without their dangers, so they must be used with caution, and not on a system that is shared.

But they do enable testers on teams to be able to leverage and grow their security knowledge for the benefit of the team. It just takes some time to learn about the feedback you get from the tool, and how to tune it to suit your needs best.


This is a really important thing to note. Like you’ve said, the tools are great at highlighting problems, but if you don’t have the knowledge or experience to interpret what they’re telling you, the output is of limited use.


I am so lucky: the company I work for does annual external audits, and we also have a dedicated internal security resource. But I am trying hard to up my security game. I was just watching a bit of TestBash Home 2020, because I missed it, and was reminded that any kind of testing has to be done often to have real impact. Testing and exploring security once in a while does not cut it, so I’m learning a lot.

Being the “evil user” with insider knowledge is the role I’m aiming for.


Drawing on @ipstefan and @skillinen’s comments; we’ve been looking at ways to improve our product’s security. Now, although our product is a specialised piece of admin software, it’s run across large institutional networks, and there is the risk that an entire network could be compromised if our product has a security vulnerability. Our product also potentially holds a lot of individuals’ personal information and so needs to be secure to meet legal privacy requirements. Even if our app is firewalled from more sensitive systems, bad actors could potentially find a gold mine of information with which to fake the identities of real users. And some of our clients have been targeted in major hacking attacks because of some of the work they do.

The lesson is that no-one can afford to be complacent about security, because each supplier has a responsibility to their client and to other product suppliers.


I haven’t followed most of these too closely, but the SolarWinds one bubbled up a bit yesterday due to the New York Times piece making the rounds suggesting TeamCity may have had a role in the SolarWinds hack (we use a self-hosted version of TeamCity). It really highlights the Swiss cheese model, and how you often need multiple failures along the way to get to the end result.


I’m really glad the Swiss cheese model is becoming more popular. It’s a useful way of communicating about the complexity of actually preventing problems. (Or, if you’re inclined to the simulated attack side of things, the complexity of causing them.)

Amazing that this supply chain attack may have come from another supply chain attack.


I think what we’ve learned is that our data isn’t safe anywhere. At some point anything can be hacked, and our data is perhaps in there. It makes me think twice before entering my data on a website or filling in a form to receive a small discount card at the mall.

