How to measure employee development

All workers like to know where their career is going. When will they get a promotion? What do they need to do? What will they be making? What about the promotion after that? How long until they reach a certain earning goal?

A lot of companies more or less wing it. They tell people they’ll get a promotion or raise when they’re ready. I once got a promotion but no raise, and my boss told me they wanted to try me out for six months before they invested anything else in me (I never did get a raise from that boss, not even well past the year mark). That kind of interaction is so frustrating.

On my current team we make it as transparent as we possibly can. Every role in an employee’s progression is laid out, and each role lists expected years of experience, educational (including certification) requirements, and salary bands. I love it because there’s no guesswork. All those questions in the first paragraph have simple, obvious answers (as long as employees keep on top of their work and continue learning).

But I still feel like we could do better. My main concern is that our “Career Path” document is pretty simplistic, and it doesn’t tie directly to actual job activities.

Here’s an example (this isn’t exactly how we lay it out, I’m just putting something down to make a point):

  • Jr Cyber Security Analyst
    • 2 years IT, no Cyber specific experience required
    • 2 entry level certs (Sec+, Network+, etc.)
  • Cyber Security Analyst
    • 2 years Cyber specific
    • Degree in cyber security OR mid level certifications (CySA+, Pentest+, etc.)
  • Sr Cyber Security Analyst
    • 5 years Cyber specific
    • (Degree and high level cert such as CISSP) OR (multiple high level certs) OR (advanced degree)

This is MILES better than “We’ll give you a promotion when you’re ready” because everyone knows exactly what to expect, and people can just focus on their growth instead of fretting about whether they’re pleasing their mercurial boss.

But it still doesn’t tie directly to work tasks. Someone could get their CISSP but still not be able to do basic work tasks like run a tabletop exercise, perform an incident post-mortem, or use our SIEM to research an incident.

How can we create a clear progression, but one that is tied to work tasks? Something that is comprehensive, without being complicated?

Learning from Teachers

I’m part of a small sort of advisory group that works with a local community college on providing industry perspectives on their cybersecurity instruction. I love stuff like this because we can help the college know what we’re looking for, and we have a great opportunity to learn from some people who teach for a living.

At our last meeting I asked one of the members of the group who does curriculum design what book I should read to improve my own instruction, and she said “Anything by Marzano.” So I picked up “The New Art and Science of Teaching” and have just been blown away by some of the techniques and the research behind them.

(SIDENOTE: as I read I couldn’t help thinking to myself “Man, my college professors could’ve learned a lot from this guy! I wish he’d been around when I was going to school!” Spoiler alert: he’s been around since the early 90s. Why don’t more teachers spend more time becoming better teachers when there are such great resources out there??)

In the first chapter he outlines the following goal:

Students understand the progression of knowledge they are expected to master and where they are along that progression.

One of the tools he references to achieve this is called a “proficiency scale.” A proficiency scale is incredibly simple, but still much more powerful than a rubric or learning outcomes document or something. It’s a table that scores proficiency from 0 to 4 (including half points), with 4 being complete mastery. What’s interesting is that you only really need to fill in three sections — the rest refer to those sections. I guess the best way to explain it is to just demo it. Here’s an example (again, just random content but you get the idea):

4.0: The student performs their own research (“Googling”) to find a tool to successfully crack WPA encryption
3.5: In addition to 3.0 performance, the student has partial success at 4.0 content
3.0: The student will:
  - Brute force a password
  - Use AirSnort to crack WEP encryption
2.5: The student has no major errors with 2.0 content, and partial success at 3.0 content
2.0: The student will recall specific hacker vocabulary such as lulz, hax0rs, and 1337, and can perform specific processes such as:
  - Trolling
  - Meming
  - Flame war(ing)
1.5: The student has partial success at 2.0 content and major errors or omissions regarding 3.0 content
1.0: With help, the student has partial success at 2.0 and 3.0 content
0.5: With help, the student has partial success at 2.0 content, but no success at 3.0 content
0.0: Even with help, the student has no success

This tool is great because it takes away the ambiguity inherent in other methods of judging success. Can they do some 2.0 stuff and no 3.0 stuff? They’re not a “C-” or something, they’re a 1.5.
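To make that scoring rule concrete, here’s a minimal sketch in Python. It’s my own illustration, not anything from Marzano: the three-level statuses (“none”, “partial”, “full”) and the simplified handling of “with help” are my assumptions about how to read the scale.

```python
# Illustrative sketch of the proficiency-scale scoring logic.
# Each status is "none", "partial", or "full"; this mapping is a
# simplified reading of the scale, not an official formula.

def proficiency_score(lvl2, lvl3, lvl4, with_help=False):
    """Map status at 2.0, 3.0, and 4.0 content to a scale score."""
    if with_help:
        if lvl2 == "none":
            return 0.0  # even with help, no success
        # partial success at 2.0 with help: 1.0 if also partial at 3.0
        return 1.0 if lvl3 == "partial" else 0.5
    if lvl4 == "full":
        return 4.0
    if lvl3 == "full":
        return 3.5 if lvl4 == "partial" else 3.0
    if lvl2 == "full":
        return 2.5 if lvl3 == "partial" else 2.0
    if lvl2 == "partial":
        return 1.5  # partial 2.0, major errors or omissions at 3.0
    return 0.0

# Some 2.0 stuff, no 3.0 stuff: not a "C-", a 1.5.
print(proficiency_score("partial", "none", "none"))  # 1.5
```

The point of the sketch is just that the scale is deterministic: given an honest read of what someone can and can’t do, the score falls out with no judgment calls about letter grades.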

It also builds the scoring around actually doing things. A multiple choice test basically tests just one thing – recall. Can they remember terms and dates? Great, they’re the best student ever. This is harder for the teacher, but so much clearer and tied so much better to actual capabilities.

Using the Proficiency Scale in the Workplace

Looking at that, it’s obvious you couldn’t make one scale per role. It would be giant and overwhelming.

Instead, I’m proposing you create one scale per tool, and tie the higher numbers to career progression.

Most organizations should have something like a product or service catalog. This is a big list that outlines every tool used in the enterprise (or at least every tool that the IT department supports). We have one of these, as well as a process catalog (with common processes like incident response, performing a post-mortem, etc.). You could make a scale for each tool and process that a specific team owns. When you break it down like that it’s not an overwhelming number.

Then you tie the performance to career progression. Let’s use the example of a helpdesk, which might have tier 1, 2 and 3 agents. There might be an entry in the product catalog for “Microsoft Windows”, and a proficiency scale for Windows could look something like this (greatly abbreviated):

4.0: The employee can troubleshoot Windows errors they haven’t encountered before with effective Googling, and can write knowledgebase articles based on their experience
3.5: In addition to 3.0 performance, the employee has partial success at 4.0 content
3.0: The employee can:
  - Troubleshoot network connectivity
  - Connect and configure all commonly used hardware (such as docks, printers, scanners, etc.)
  - Search for and find the correct knowledgebase article to solve a problem, and follow it
2.5: The employee has no major errors with 2.0 content, and partial success at 3.0 content
2.0: The employee is familiar with Windows-specific terms such as “registry” and “MSConfig” and can perform the following tasks:
  - Install Windows on a new computer
  - Join a Windows computer to the domain
  - Connect a Windows computer to the network
1.5: The employee has partial success at 2.0 content and major errors or omissions regarding 3.0 content
1.0: With help, the employee has partial success at 2.0 and 3.0 content
0.5: With help, the employee has partial success at 2.0 content, but no success at 3.0 content
0.0: Even with help, the employee has no success

This example uses the proficiency scale to outline the differences between tiers 1, 2 and 3. An employee would be Tier 1 from 0.0 to 2.5, Tier 2 from 3.0 to 3.5, and Tier 3 at 4.0.

Of course, they would only be at that tier level for one product. Ideally you would have a proficiency scale for each product and process. Let’s say you have 20 products/processes. Through standard work the employee would learn where they are in each scale (and their manager would confirm it). In one-on-ones they might review their progress across the 20 categories for any outliers, and create quick plans to improve in those products.

The manager could indicate what percentage of categories need to reach 3.0 for the employee to qualify for the promotion and raise to Tier 2 status: let’s say they need to be at 3.0 in 15, and at least 2.5 in the other 5. Maybe Tier 3 requires more autonomy and expertise, and they need to be at 4.0 in everything before they qualify for that.
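A threshold rule like that is simple enough to write down in a few lines of code. Here’s a Python sketch; the scale names, scores, and thresholds are all invented for illustration.

```python
# Hypothetical sketch: checking Tier 2 eligibility from proficiency scores.
# Thresholds here are example values, not a prescribed standard.

def qualifies_for_tier_2(scores, required_at_3=15, floor=2.5):
    """True if enough scales are at 3.0+ and none fall below the floor."""
    at_or_above_3 = sum(1 for s in scores.values() if s >= 3.0)
    below_floor = sum(1 for s in scores.values() if s < floor)
    return at_or_above_3 >= required_at_3 and below_floor == 0

# 15 scales at 3.0, the other 5 at 2.5 -- exactly at the bar.
scores = {f"scale_{i}": 3.0 for i in range(15)}
scores.update({f"scale_{i}": 2.5 for i in range(15, 20)})
print(qualifies_for_tier_2(scores))  # True
```

Writing the rule this explicitly is the whole point: the employee can check their own eligibility from their scores, with no guessing about what the manager is thinking.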

However the manager wants it, the advantages are huge. Employees know “the progression of knowledge they are expected to master and where they are along that progression.” And the measurements reflect work that actually must be performed, instead of industry standard certifications which likely include lots of information the employee won’t actually need.

There’s no question of “What should I work on?” because it’s laid out in black and white. Although there are still questions of productivity, at least the manager knows the employee has the knowledge to meaningfully contribute.

I think this model works especially well for something like a helpdesk, with discrete tools and processes, but it could work for basically any role where there are specific skills that need to be developed. I’m still working out this part, but it would be interesting to try and use with soft skills associated with management. You could have a scale around delegation, or developing employees, or even resolving conflict.

I also like that this framework makes you look at the knowledge that is actually required for a role. If you start writing it out and go “Man, I’m going to need a THOUSAND of these for this role!” then maybe that role has scope that is a little unreasonable. Maybe that role should actually be two different roles.

But we can use a simple process to put it in place for an IT department:

  1. Make sure you have accurate service/product and process catalogs
  2. Create scales for the 10-20 most important items in those catalogs
  3. Measure your team based on those scales
    • Do some team members score higher than their current role would imply? Do some score lower? Is expertise centralized in just a few people? Or does everyone know just a few tools?
  4. Make plans for individual improvement, and team makeup
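Step 3 could be prototyped as a quick script: given each team member’s score per scale, flag any tool where expertise is concentrated in a single person. The names and numbers below are made up for illustration.

```python
# Hypothetical sketch of a team coverage check.
# team_scores maps each person to their score on each scale.
team_scores = {
    "Alice": {"Windows": 4.0, "SIEM": 3.0, "Printers": 2.0},
    "Bob":   {"Windows": 3.0, "SIEM": 1.5, "Printers": 2.5},
    "Cara":  {"Windows": 2.0, "SIEM": 1.0, "Printers": 3.5},
}

tools = ["Windows", "SIEM", "Printers"]
for tool in tools:
    # Treat 3.0+ as "can work independently" on this tool.
    proficient = [name for name, s in team_scores.items() if s[tool] >= 3.0]
    flag = " <- expertise centralized in one person" if len(proficient) <= 1 else ""
    print(f"{tool}: {len(proficient)} proficient ({', '.join(proficient) or 'nobody'}){flag}")
```

Even at this toy scale, the outliers jump out, which is exactly the conversation you want to have when planning team makeup.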

Although it’s a decent chunk of work, it takes employee development and workforce planning and makes them more objective, more transparent, and, frankly, more doable. And aren’t those the most important things that managers do?

