Turning your WHY into ACTION

If you read my last post (You can’t get to How without Why) you should have a solid “WHY” for yourself or your team or product or whatever. If you haven’t read that post yet, I recommend reading it before continuing here.

GREAT! You’ve got your why. What do you do with it?

In this post we’ll talk about how to turn a WHY into action. We’ll be leaning pretty heavily on a few resources and I want to highlight them up front. The basic process is this:

  1. Figure out WHY you’re doing something (we did this!)
    • Based largely on “Good Strategy/Bad Strategy” by Richard Rumelt and “The Toyota Way” by Jeffrey Liker
  2. Figure out a challenge or Target Condition
    • Steps 2, 3, 4, and 6 are based largely on “Toyota Kata” by Mike Rother
    • This step also includes a little bit of “Good Strategy/Bad Strategy”
  3. Grasp the current condition (or figure out “What’s Going On Here”)
  4. Find gaps between the current condition and the target condition
  5. Create metrics to measure your progress
    • This is based on both “How to Measure Anything” by Douglas Hubbard and “The Balanced Scorecard” by Kaplan and Norton, with a dash of “The High-Velocity Edge” by Steven Spear
  6. Experiment your way to your target condition
  7. Now rinse and repeat!

By the end of this post you’ll have a basic idea of how to turn your intent into action (as well as a good list of other resources to deepen your skills). But before we get into the process, let’s talk about how I learned the need for a disciplined improvement process.

The story of, well, literally every call center

I worked in a call center helpdesk while I was in college. It was my first experience in a large IT department as well as a large call center. What was funny to me was how bureaucratic everything was. Coming from working almost exclusively in small businesses, it was quite a shock.

For example, when I worked for a one-man computer shop, if I wanted a raise I would say “Hey, do you think I can get a raise?” and my boss would think for a minute and go “Uh … sure!” or he’d go “We can’t afford it right now.” And that was that.

At this helpdesk you got a raise every quarter, up to 50 cents, and the size of the raise was based on a bunch of metrics. I don’t remember the exact formula, but it might’ve looked something like this:

  • $0.10 based on your first call resolution
  • $0.10 based on attendance
  • $0.10 based on call length
  • $0.10 based on ticket completion
  • $0.10 based on contributions to the knowledgebase

For each of those items there wasn’t a fixed threshold; generally your raise was based on your performance compared to your peers, with specific weighting based on quarterly goals. I was told right away by a manager that NO ONE got the full fifty-cent raise, but if I worked hard I could maybe get the average of 37 cents a quarter.

(Not to brag, but I got the full fifty-cent raise TWICE. I guess that is literally solely to brag. Let me rephrase:

To brag: I got the full fifty-cent raise TWICE.)

Every quarter they would send out a communication indicating which metrics were going to be the focus for the next three months. Maybe they would focus on call length. What do you think happened then?

Well, there was a sudden, unexplained uptick in calls that got disconnected after just a minute or two of work. Or people were told to call back later. Or calls got escalated to a higher-level team with no real work done on them.

In the end, call volume went through the roof, resolution times climbed right along with it, and satisfaction plummeted.

Clearly that metric was causing more harm than good, so they’d switch to a different one. First call resolution!

All of a sudden people would claim resolution even if the issue wasn’t resolved. They’d tell the user “oh yeah, just restart your router. Call back if it doesn’t work,” then they’d hang up and mark “Call resolved.” And when the user called back, either it was someone else’s problem, or they’d get similarly curt direction and be told to call back again if that didn’t resolve it.

Ticket volume went up, satisfaction went down, and another metric was doing more harm than good.

This repeated every quarter for years. The real question wasn’t how each metric would improve our service, it was how people would game it.

This isn’t unique to my place of employment. I would guess every call center faces similar pressures. Most have taken to trying to measure customer satisfaction, which is why every call with a call center now ends with “Great, if there’s nothing else then my name was Dave. There will be a short survey at the end of this call. It would mean a lot to me personally if you could give me five stars. Thank you and goodnight.”

Just another way to game the metrics.

Clearly management knew that things could be better at our helpdesk. And they continuously tried to improve them! That’s not a bad thing at all.

But it didn’t work. After a few years they brought in consultants to propose improvements and wound up completely redesigning the structure of the organization.

Why didn’t it work? Well, a few reasons, but we’re going to start with the fact that they didn’t know exactly where they wanted to go. Even more importantly, if they did know, they didn’t tell the rest of us, which amounts to the same thing. So how do you figure out where you’re trying to get to?

Defining a challenge

Toyota Kata has a fantastic diagram illustrating the Improvement Kata (the pattern Toyota follows to improve their work). The diagram lays out four steps: understand the direction or challenge, grasp the current condition, establish the next target condition, and then experiment toward that target condition.

Note the order of those steps. You start by getting a direction or challenge.

Challenge is an interesting word! As they describe it in “The Toyota Kata Practice Guide”:

An overarching strategic challenge is an actual destination that’s a distinctive and concrete value proposition related to better serving the customer. It’s a picture of success — a description of a new level or pattern of performance that will differentiate your organization’s offering from other offerings — that lies six months to three years in the future. A challenge is:

  • A clearly described new customer experience that you would like to offer …
  • Something you can’t yet achieve with your current system and processes.
  • Not easy, but not impossible. It’s achievable, but we don’t know how yet …
  • It’s measurable, so you can know if you are there or not.

The “Challenge” is the future state. It’s where you would love to get to — and you can only really create a future state when you fully understand your WHY. But when you understand that why? A future state should be relatively easy to envision.

Let’s go back to the helpdesk I was talking about. Why did it exist? Well, they served a variety of clients connected to a non-profit, a large portion of whom were elderly. They often supported them in setting up a handful of very specific pieces of hardware (think a single model of router), or in answering general IT questions (How do I turn on the computer? How do I print?).

If you were to bottle that up in a one-phrase “why,” it might be “we provide timely service to users of all technical levels to resolve technology problems both large and small.”

With that why clearly understood you can envision an ideal interaction, which helps create a target condition. For example, you might picture a college student immediately answering a call from a retired person who is in a panic because the network is down. The college student calmly listens, empathizes, gets contact info, determines that the network really is down from server-side tests, and then immediately does a warm transfer to a higher-level technician who is equipped to solve their problem.

That sounds delightful, doesn’t it? That very generalized description is our challenge. We can double-check that it’s a good challenge using the four-bullet questionnaire from above:

  • Does it clearly describe a new customer experience you would like to offer? YES
  • Is it something you can’t yet achieve (or don’t yet achieve) with your current system and processes? YES (probably, more on that in a second)
  • Is it not easy, but not impossible? YES (probably, more on that in a second as well)
  • Is it measurable? (more on this later)

Grasping the current condition

Since we know how the helpdesk would function in a perfect world, it’s time to see if that’s how it’s functioning right now. The best way to do that is to go and watch the work yourself.

See, in The Toyota Way (and other lean books) they make a distinction between data and facts. Data are numbers that represent something, but facts are things you see with your own eyes.

You may receive data that says someone answered every single call in a helpdesk on the first ring. But when you go look with your eyes you’ll see that the agent is actually busy playing a video game, so when someone calls they pick up the phone, and then take thirty seconds unlocking their computer, putting on their headset, and getting ready — all while the customer listens to dead air.

The data and the facts are both accurate depictions of what is happening, but while one shows someone doing their job, the other shows a potential cause for concern.

SO! Go and observe. If the leadership at the call center had been observing us they would’ve seen the behaviors highlighted above — they would’ve seen team members gaming the system. They could’ve used that to create a “current state” description that might’ve looked something like this:

“A disengaged college student answers the phone. They listen to a panicked senior talk about how something is wrong with their building and the internet is down. They do their best to get off the phone as quickly as possible — they tell them to restart the router, they tell them to call their ISP, or they escalate to a team that may or may not be able to help them.”

This is a more accurate picture of how calls usually went. It’s not flattering. It would probably make management (and employees, for that matter) feel pretty uncomfortable. BUT IT IS ACCURATE. It is FACTUAL. And it tells a complete story that the data could never tell you.

That’s the main problem with data — it can be interpreted in many different ways. It’s useful, but it shouldn’t be relied on alone.

Finding gaps

Now we have a complete picture. We have our target condition, and the current condition. They look like this:

Current Condition: A disengaged college student answers the phone. They listen to a panicked senior talk about how something is wrong with their building and the internet is down. They do their best to get off the phone as quickly as possible — they tell them to restart the router, they tell them to call their ISP, or they escalate to a team that may or may not be able to help them.

Target Condition: A college student immediately answers a call from a retired person who is in a panic because the network is down. The college student calmly listens, empathizes, gets contact info, determines that the network really is down from server-side tests, and then immediately does a warm transfer to a higher-level technician who is equipped to solve their problem.

We can now use our four-bullet questionnaire more completely:

  • Does it clearly describe a new customer experience you would like to offer? YES
  • Is it something you can’t yet achieve (or don’t yet achieve) with your current system and processes? YES, there is clearly a wide gap between our current and target conditions
  • Is it not easy, but not impossible? YES; although it will take a lot of effort, there’s no reason to assume your employees are incapable of achieving the target condition
  • Is it measurable? GREAT QUESTION! More on that in a second

We are finally prepared to measure gaps. Now, this is a very person-centric example. I chose it because I want to make clear that you can identify and measure gaps in processes that involve people just as easily as you can in engineering problems.

Looking at these two paragraphs, what are the main gaps we see? Some of them are things we can measure, some we can’t (or don’t care to), but we’re not focusing on measurement yet. We’re just asking ourselves “What are the gaps between our current and target conditions?”

Take out a piece of paper and write a list down. Seriously! Take a minute to actually write down a list of all the gaps you see. I’ll wait. When you’re done, come back and compare your list to mine.

  • The attitude they take when answering the call
  • The way they greet the customer
  • The lack of listening skills
  • The lack of empathizing and restatement
  • The failure to document crucial information
  • The failure to proactively verify the customer’s own analysis
  • The failure to route the call to the proper escalation resource
  • The use of a cold transfer instead of a warm transfer

Hopefully your list and mine overlap a little bit. I understand that my very brief paragraphs above didn’t provide a lot of specificity that we could dig into, but ideally what you’ve seen is that even a relatively short current condition and target condition description is more useful than none at all.

What’s interesting is that, even with our list of gaps, the work isn’t done — not even close to it. That’s because each of those gaps may have its own reasons. Now you need to dig into each gap, which will likely involve more talking to people, more observation, and more data.

Let’s look at a simple one — the way they greet the customer. Maybe your helpdesk (like the one I worked at) has a script for answering a call. If there’s already a standard, then you need to talk to people and find out why they don’t follow it. Are they not aware of it? Is it too clunky? Does it feel disingenuous, or ask them to say things they would prefer not to? Does it take too long (remember, call time was a metric they measured)?

Probably, as you talk to people, you’ll find a mixture of reasons. Maybe some were never trained correctly, while others just don’t like the language, and some find that if they follow it their calls end up way too long.
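If it helps, you can capture all of this in a structured form as you go. Here’s a minimal sketch in Python, purely illustrative: the Gap class, its fields, and the example causes and signals are my own invention, not something prescribed by Rother, Liker, or Hubbard.

```python
from dataclasses import dataclass, field

@dataclass
class Gap:
    """One gap between the current condition and the target condition."""
    description: str
    hypothesized_causes: list[str] = field(default_factory=list)
    candidate_signals: list[str] = field(default_factory=list)

# The greeting gap from our list, with the causes we turned up by
# talking to people and the observations that could confirm them.
greeting = Gap(
    description="The way they greet the customer",
    hypothesized_causes=[
        "never trained on the script",
        "script feels disingenuous",
        "script makes calls run long",
    ],
    candidate_signals=[
        "fraction of sampled calls that open with the greeting",
        "agent feedback on the script",
        "call length with vs. without the greeting",
    ],
)

for cause in greeting.hypothesized_causes:
    print(f"{greeting.description}: investigate '{cause}'")
```

A spreadsheet or a piece of paper works just as well. The point is that each gap carries its own hypothesized causes and its own candidate observations, and you shouldn’t collapse them into one number too early.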

Every bullet point on our list is similarly complicated. There may be multiple reasons, and there may be multiple solutions. But you can’t get started on solutions until you find a definitive way to measure progress towards your target state. So let’s start making some measurements.

FINALLY, how to make good metrics

In Douglas Hubbard’s “How to Measure Anything” he talks about how managers who worked with his group would immediately want to jump into measurement and modeling. He told them time and time again that before they measured anything, they needed to know what decision the measurement was supporting. As he put it:

If managers can’t identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value.

The reason to measure anything is to take action — to make a decision. In fact, Hubbard proposes five things you need to know before you even start thinking about taking a measurement:

  • What is the decision this measurement will support?
  • What is the definition of the thing being measured in terms of observable consequences, and how, exactly, does this thing matter to the decision being made?
  • How much do you know about it now?
  • How does uncertainty about this variable create risk for the decision?
  • What is the value of additional information?

Of his five pieces of information, three of them directly mention the related decision, and two allude to it. Defining the decision you wish to support is more important than taking measurements, because without that the measurements don’t matter!

Luckily (sort of) for us, our decision is always going to look relatively similar to this:

“Is this change we made to our processes/procedures/whatever bringing us closer to our target condition, and should we maintain, expand, or discontinue the change?”

Unluckily, deciding on metrics can still be quite complicated. Going back to our example, how would we measure someone’s empathy?

Hubbard makes another really good point here. The goal of measurement isn’t necessarily to quantify something exactly — in many cases that’s impossible anyway. Instead the goal is to reduce uncertainty.

Looking at empathy, you may not be able to create a quantifiable “Empathy Quotient” that indicates on a scale of 0-100 exactly how empathetic someone is.

But you CAN look at someone and say “You were more empathetic on this call than the last one I listened to.” Or you can look at two people and say “Oh yeah, that person is more empathetic than this other one.”

Even though we balk at creating exact scales, we know intuitively that empathy can be measured. Just not with perfect accuracy.

But you don’t NEED exact accuracy. You just need to know if the change is improving things, keeping them the same, or making them worse.

So look at your list of gaps and ask yourself “What would I need to observe to know that things are improving, staying the same, or getting worse? What changes would I expect to see?” From that starting point, start narrowing things down to something more quantifiable.
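To make that concrete, here’s a minimal sketch in Python of what “good enough” measurement can look like. Everything in it is hypothetical: the 1-5 rubric, the scores, and the decision rule are mine, not Hubbard’s. The point is just that a coarse ordinal rating, sampled before and after a change, is enough to support a maintain, expand, or discontinue decision.

```python
import statistics

# Hypothetical empathy ratings (1 = cold, 5 = deeply empathetic) from a
# reviewer who listened to ten randomly sampled calls before and after
# a script change. The numbers are invented for illustration.
before = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
after = [3, 3, 4, 2, 3, 4, 3, 2, 4, 3]

# No exact "Empathy Quotient" required -- a coarse ordinal rubric is
# enough to tell improving / staying the same / getting worse.
print(f"median before: {statistics.median(before)}")
print(f"median after:  {statistics.median(after)}")

if statistics.median(after) > statistics.median(before):
    print("Calls look more empathetic: maintain or expand the change.")
elif statistics.median(after) < statistics.median(before):
    print("Calls look less empathetic: reconsider the change.")
else:
    print("No detectable difference yet: keep observing.")
```

You could argue forever about whether a 3 on the rubric “really” captures empathy. It doesn’t matter much. The before-and-after comparison is what reduces uncertainty about the decision, and in a real experiment you’d also sample a guardrail metric (like call length) to catch unintended consequences.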

In the interest of brevity, I’m just going to list some general guidelines for making measurements, drawn from several sources: Hubbard’s book, The Toyota Way, Toyota Kata, and The Balanced Scorecard.

  • There are both process metrics and outcome metrics. It’s OK to track outcome metrics as a manager, but you should primarily incentivize and motivate people to strive for process metrics. As they say at Toyota, “Right Process leads to Right Results.”
  • No metric is an island — they will undoubtedly impact other metrics. That’s why it’s helpful to track both metrics you want to change, as well as metrics that SHOULDN’T change. For example, you might track call times after implementing a new welcome script, but also track satisfaction levels and disconnects to see if the change has unintended consequences.
  • Related: to get a full picture you need more than one metric, but too many is just confusing. In “The Balanced Scorecard” they recommend four “perspectives” to cover your bases: financial, customer, internal business process, and learning and growth. You don’t need to use those four, but they’re a good indicator of the conceptual distance between your measurements, as well as how many areas of focus are reasonable.
  • Metrics don’t need to be perfect, but they need to reduce uncertainty enough to enable decision making.
    • Hubbard has some great rules of thumb for this, such as the “rule of five” (there’s a roughly 93% chance that the median of a population will fall between the highest and lowest values in a random sample of just five, no matter the population size; see the simulation sketch after this list) or what he calls the “Urn of Mystery” (basically, there’s a 75% chance that a single random sample comes from the majority)
  • The Five Whys aren’t just useful for finding out why; they’re also useful for tracking down metrics that actually matter
  • Metrics support experimentation, and thus they are an experiment themselves. Don’t be afraid to change.
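That rule of five sounds too good to be true, but you can sanity-check it yourself in a few lines. Here’s a quick Monte Carlo sketch in Python; the lognormal population is an arbitrary choice, just to show the rule doesn’t care about the distribution’s shape.

```python
import random
import statistics

# Monte Carlo check of Hubbard's "rule of five": for any population,
# the chance that the true median falls between the smallest and
# largest values in a random sample of five is about 93.75%.
random.seed(42)

# An arbitrary, heavily skewed population -- the shape doesn't matter.
population = [random.lognormvariate(3, 1) for _ in range(100_000)]
true_median = statistics.median(population)

trials = 10_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"median captured in {hits / trials:.1%} of samples")  # prints ~93-94%
```

The math behind it: the median is only missed when all five samples land on the same side of it, which happens with probability 2 * (1/2)^5 = 1/16, so the hit rate is 15/16 = 93.75%.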

That final bullet point is too big for a bullet point, so we’ll give it its own little subsection right NOW.

The most common metric mistake

Looking back at my call center example, I think they made one main mistake that many organizations make: they didn’t explain the metrics. Most importantly, they didn’t tie them back to the future state.

People aren’t automatons. You can’t just tweak a dial or pull a lever and see what happens. If you change processes and tell people they’re being measured on something, they want to know what is changing and, most importantly, why.

If you are going to start measuring something, especially if you are going to incentivize people in some way, you MUST explain to them why you’re doing it. You have to paint a picture of the future state for them that they can get behind.

When you do this people will be more willing to try, and also less eager to game the system — especially if you clarify that you’re just trying an experiment to see if it works. With that framing people will feel empowered to come back to you with feedback, and to make sure their reporting of the metrics reflects reality.

Let’s look at our process one more time:

  1. Figure out WHY you’re doing something (we did this!)
    • Based largely on “Good Strategy/Bad Strategy” by Richard Rumelt and “The Toyota Way” by Jeffrey Liker
  2. Figure out a challenge or Target Condition
    • Steps 2, 3, 4, and 6 are based largely on “Toyota Kata” by Mike Rother
    • This step also includes a little bit of “Good Strategy/Bad Strategy”
  3. Grasp the current condition (or figure out “What’s Going On Here”)
  4. Find gaps between the current condition and the target condition
  5. Create metrics to measure your progress
    • This is based on both “How to Measure Anything” by Douglas Hubbard and “The Balanced Scorecard” by Kaplan and Norton, with a dash of “The High-Velocity Edge” by Steven Spear
  6. Experiment your way to your target condition
  7. Now rinse and repeat!

Let’s use the example of the call center. Ideally they would’ve defined and communicated to us repeatedly WHY we existed — why the work we did mattered.

They would’ve told us that we had room for improvement, and given us a clear picture of what that improvement would look like by setting a clear, achievable and inspiring target condition.

They would’ve understood our current condition, measured the gaps, and made plans on how to close them, complete with measurements to help them know if they were getting closer to the target condition or not.

They would’ve TOLD us about those experiments, and the effect they were hoping to witness. Then they would’ve clearly communicated how things went compared to what they hoped, so we would know what worked well and would stick around, and what didn’t and could be discarded.

And then we would’ve cycled through, experiment after experiment, working together as a team to get all of us closer to our target condition. In that case our target condition might’ve involved improving the lives of senior citizens volunteering for a non-profit. What’s not to love about that??

That is a very basic outline of how to turn your intent into action. But this is a subject that you could study for years and keep making better, so I really recommend the following for you (in the order I think you should read them):

  • The Toyota Way
  • Toyota Kata
  • How to Measure Anything
  • The High-Velocity Edge
  • The Balanced Scorecard
