Competency Matrix

Our company wants to move toward implementing a competency matrix for the testing department. I have been Googling and have found many definitions for a competency matrix. From what I have found, there is an unclear delineation between a competency matrix and a skills matrix: some sources treat them as the same, while others attempt to define them as different. So far I am leaning toward the idea that skills are elements of a competency, so one competency may have one or many skills. I don't know if my take on this is correct or not, so I am asking for some help to add clarity to the question of what constitutes a competency matrix. Also, if anyone has examples of a testing competency matrix, I'd be appreciative if it could be posted in this discussion.

Thanks in advance.


Hello @cdebro (Chuck D), and welcome!

Great question! I agree with you that skills are elements or properties of competencies (that is, a competency may be a collection of skills; a tester may have several competencies).

We explored a list of competencies for the testing job family a while back. The near-final version listed competencies in the first column and cognitive behavioral requirements (from Bloom's Taxonomy for Cognition) across the top. In that manner, we could define levels of behavior for each competency, which helped place skills and capabilities at a certain level within the Tester job family.


My team maintains and uses one. I won't be able to post it here (company confidential), but I can tell you we have a range of competencies in the following categories: Innovation, Core Competencies, Tooling, Problem Solving, Knowledge and Understanding, Interaction, and Performance (as in performance testing, a big consideration with our product, hence it getting singled out).

Under Core Competencies (for instance) we have subcategories for: 3 Amigos, Backlog Grooming, Planning, Stand Ups, Test Planning, Test Review, Defect Creation and Test Execution.

And then under "Test Planning" (for example) we have bandings for Junior, Mid, Senior and Lead testers. We expect, for example, a Senior to be able to do everything a Junior and a Mid can do, and more:

Junior: Creates clear test cases which follow good practices, within test management tool.

Mid: Writes good test cases; can coach others on what good looks like; able to bring in other styles of creating test cases.

Senior: Mentors other members of the team/business on test case creation.

Lead: Drives and maintains standards for test cases with the wider community of practice. Actively works to introduce new ideas and approaches to test planning within the testing discipline.

So… for each major area of the job, we have a group of competencies, and for each competency, we have a range of "levels" which delineate our expectations for each tier of the tester career path. We don't expect everyone to be at "the right level" in everything; people will be better at some and worse at others. We also actively update this document several times a year (or as needed), as a community of practice.
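For anyone who wants to prototype this shape before committing to a spreadsheet, the structure described above (areas → competencies → per-tier expectations) maps naturally onto a nested dictionary. A minimal sketch, assuming Python: the area, competency, and level wordings are taken from this post, while the `expectation` helper and its "N/A" fallback are purely illustrative.

```python
# Sketch of the matrix shape described above: each major area holds
# competencies, and each competency holds an expectation per career tier.
matrix = {
    "Core Competencies": {
        "Test Planning": {
            "Junior": "Creates clear test cases which follow good practices, "
                      "within the test management tool.",
            "Mid": "Writes good test cases; can coach others on what good "
                   "looks like; brings in other styles of creating test cases.",
            "Senior": "Mentors other members of the team/business on test "
                      "case creation.",
            "Lead": "Drives and maintains standards for test cases with the "
                    "wider community of practice.",
        },
    },
}

def expectation(area, competency, tier):
    """Return the written expectation for a tier, or 'N/A' when the
    competency doesn't apply (e.g. a team that doesn't run 3 Amigos)."""
    return matrix.get(area, {}).get(competency, {}).get(tier, "N/A")

print(expectation("Core Competencies", "Test Planning", "Senior"))
```

The "N/A" default mirrors the point below about erring on the side of including more than you need: a missing entry is an explicit non-answer, not an error.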

The good thing about this is that it's clear what the expectation for your role is, as well as how this differs from both more junior and more senior roles, so you can see what good looks like, and also what "below the expectation for your role" looks like. The bad thing is that it's very concrete and can't really cater for every individual, situation or application of a competency. For example, some of my teams don't run 3 Amigos. How can a tester in that team expect to do well at that part of the framework? So in the end we err on the side of including more than we need and accept that, for some things, it's an N/A.

We definitely don't maintain separate sets of skills and competencies; it sounds like that would be a pain for the employee. Whether or not skills are elements of a competency isn't important in practical terms. What do your employees need to be doing to be doing a good job? For me, that's the point of the competency framework.


Welcome to the community, Chuck!
A while back I designed a model based on the agile quadrants - that might interest you.

(Previously an MoT article; now only fully available via the web archives.)

I'm in the process of establishing a competency model in my team. We currently have 31 general skills, plus an additional 2-4 skills per tester specialisation (manager, tech, art, etc.).

I've divided the general skills into QA, Tech Knowledge and Interpersonal skills.

QA has:

  • Game Testing (bug management, QA knowledge, game dev knowledge)
  • Test Management (design, execution, scope and planning)
  • QA Analyst (documentation, requirement analysis, QA and QA effort)
  • Embedded QA (team participation, team responsible, team embedded, team meetings)

Tech Knowledge has:

  • QA Tools
  • Dev Tools
  • Test Environments
  • Code Understanding

Interpersonal has:

  • Communication
  • Feedback
  • Team Building
  • Production
  • QA Traits

Every skill is measured on a scale of 1-4 based on situational leadership. Every position (from Junior to Senior) has a required level for each skill, e.g. Junior - Bug Management - L1.

Every tester has a "character sheet" on which we assess together (manager, HR and the tester) what level each skill is at. Then we decide which skills we want to develop in the upcoming months.
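To make the "character sheet" idea concrete, here is a minimal sketch in Python: required levels per position, a tester's assessed sheet, and a gap report suggesting which skills to develop next. The skill names come from the lists above, but the specific numbers, the `required` table, and the `gaps` helper are all invented for illustration.

```python
# Required level (on the 1-4 situational-leadership scale) per skill,
# per position. Skill names are from the model above; the numbers here
# are invented examples, not our real thresholds.
required = {
    "Junior": {"Bug Management": 1, "Communication": 2, "Test Environments": 1},
    "Senior": {"Bug Management": 3, "Communication": 3, "Test Environments": 3},
}

def gaps(position, assessed):
    """Compare a tester's assessed levels against what their position
    requires; return only the skills that fall short, and by how much."""
    return {
        skill: need - assessed.get(skill, 0)
        for skill, need in required[position].items()
        if assessed.get(skill, 0) < need
    }

# A tester's "character sheet", filled in together with manager and HR.
sheet = {"Bug Management": 3, "Communication": 2, "Test Environments": 1}
print(gaps("Senior", sheet))  # → {'Communication': 1, 'Test Environments': 2}
```

In practice the "no hard borders" point below matters: a report like this is a conversation starter for the development plan, not a score to optimise.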

What was the most important part in making it?

  • Clear definitions of skills and levels (super important to have these clearly defined).
  • I started with a larger number of skills but trimmed the list down to avoid making it too detailed (I think even 31 is too many; as we run the whole process we will probably be able to trim it further - we have already seen that some skills overlap in scope).
  • Not making hard borders - people are not characters in a game. Some testers will have a clear level in a skill, some will be hazy and fuzzy, and that's natural.

What are the advantages of that system?

  • It allows testers to see what is required of them.
  • Feedback sessions are more constructive and become a discussion (the tester now knows what it means to be a tester and what the skill set is).
  • It removes as much subjectivity from evaluation as possible (without such a matrix, evaluation is very subjective, and I think that is not fair).

What are the disadvantages?

  • Creating and starting such a system is very resource- and time-consuming.
  • It requires a lot of creativity in setting standards and definitions (programming and art are more defined and their skill sets are more or less settled - QA does not have that).
  • It may tempt testers to try to game the system (that's why the fuzzy and hazy parts must be preserved).
  • Such a system is not embedded into the company evaluation process, so, in my case, my matrix is run alongside the company system.

Hope my experience was helpful :)


One of the important things when doing this, I feel, is that it isn't a cast-iron thing. We have one, but it is used more to suggest which areas people can improve on, and what we expect people to be able to do before we consider them to be at various stages of personal growth within the business. We also believe that people may see a skill or area of knowledge outside of the matrix that will benefit the company, and we want them to feel they can learn that at the expense of something within the matrix if it benefits the company. In short, it should be suggestive rather than restrictive. Also, they're bloody difficult to write, in my opinion…


This post has some interesting thoughts in regard to skill mapping and rating people -

An excerpt from the page, which stands out for me:

What is your skill level as a product manager?

  1. I am an expert (stand up)
  2. I need some training (hand up)
  3. I need full training (sit down)

Some people stood up, more put their hands up and more again stayed seated. As you might expect.

So I asked again, in a different way.
I told them that they would only get a pay rise if they said they were an expert.
A lot more people stood up.

So I asked again, this time saying they wouldn't get any training unless they said they needed it now.
As you might imagine, this changed the answers once more and now we had a lot fewer experts.

I've recently been involved with rating myself against tools and applications, from 0-3, and when I push for tangible examples of each, I get given what each score means in general, with 3 being that you not only know it but could train people on it.
But what if you only know some parts of something? Could you not train on what you know, even if you don't consider yourself a master? I couldn't explain all the different syntax in SQL, but that doesn't mean I couldn't teach someone how to do a basic SELECT query.


Definitely agree with this. Someone could be doing better or worse in a number of areas and still be in the correct “bracket” - however I do believe the matrix should be general enough that the majority of people are in the correct bracket for most things, or the expectation is out of step with the reality of the roles. One way to achieve this is to set expectations realistically based on behaviours and skills the team actually has - sure, in some orgs certain things may be expected at entry level, and in others they may be “lead” capabilities, so this is one reason I’m against carrying the same matrix between organisations.

Also, it’s important to review these (at least) year on year to check where bars have raised or lowered depending on a changing landscape in the org, new hires etc etc.

The framework should only drive standards by showing what “better” looks like, and letting people reach it - this year’s expectation of a Lead shouldn’t become next year’s expectation of a Senior, for instance. Changes and revisions to the matrix should be reviewed by the team it’s being applied to and you shouldn’t introduce a competency if everyone is going to come out really low on it, in order to motivate them to improve.

The description of “showing what “better” looks like” is spot on. To this end, it is important to share the matrix.

I'm thinking out loud here, but it may be worth each staff member having access to a spreadsheet with two tabs: one that they "colour in" themselves and one that the head of the department fills out. That might create anxiety, though. Maybe they just have one they colour in themselves, and the head of the department keeps one of their own that they then use to make comparisons during catch-ups. I'll come back on this one…
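Continuing the thinking-out-loud: the two-tab comparison boils down to diffing two sets of scores per skill and starting the catch-up conversation where they disagree most. A small sketch, with every name and number invented for illustration:

```python
# Hypothetical two-tab comparison: the tester "colours in" one set of
# scores (0-3), the head of department fills in another, and catch-ups
# focus on where the two views diverge.
self_scores = {"SQL": 2, "Exploratory Testing": 3, "Automation": 1}
manager_scores = {"SQL": 3, "Exploratory Testing": 3, "Automation": 2}

# Largest disagreements first: these are the conversation starters.
diffs = sorted(
    ((skill, manager_scores[skill] - self_scores[skill]) for skill in self_scores),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for skill, delta in diffs:
    if delta:
        direction = "higher" if delta > 0 else "lower"
        print(f"{skill}: manager rates {direction} by {abs(delta)}")
```

Whether the manager rates higher or lower is itself useful: the excerpt above shows how incentives skew self-assessment in both directions, and a visible gap surfaces exactly that.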