OKRs are all about measurement, which is what makes them so powerful.  Being passionate about data-driven decision making, I was fascinated by Rick Klau's video about OKRs at Google.  The following are my thoughts on measuring, grading and reviewing OKRs.

How can we measure OKRs?  An OKR consists of an Objective and Key Results.  Each Key Result needs to be measurable and have a target.  The score for each Key Result should fall between 0.0 and 1.0 and is the outcome divided by the target.  To get a score for the whole OKR, average out the scores of all its Key Results.
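As a minimal sketch of that arithmetic (the KeyResult structure and function names below are mine, not part of any OKR standard; I've also assumed scores are capped at 1.0 so they stay in range):

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    target: float   # set before the OKR is shared
    outcome: float  # the actual result at review time

def score_key_result(kr: KeyResult) -> float:
    # Outcome divided by target, capped at 1.0 to keep scores in 0.0-1.0.
    return min(kr.outcome / kr.target, 1.0)

def score_okr(key_results: list[KeyResult]) -> float:
    # An OKR's score is the average of its Key Result scores.
    return sum(score_key_result(kr) for kr in key_results) / len(key_results)

# e.g. the sales team called 700 of a 1000-lead target:
print(score_key_result(KeyResult("Call 1000 leads in Q1", 1000, 700)))  # 0.7
```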

If we set arbitrary OKRs with vague Objectives and Key Results, we could end up with a set of Key Results that can't be completed and Objectives that have an unknown impact on the business.  Let's dive deeper into scoring and grading Key Results and then, using the performance as feedback, discuss how to tune not only the Key Results but also the Objectives.

Measuring and Grading OKRs

OKRs are like experiments where we propose the hypothesis:

"If we succeed actioning the following Key Results, then we should see the Objective move up/down by X".

To run an experiment successfully, we need to be able to measure both the inputs (the Key Results) and the output (the Objective).

Each Key Result has to be something that the person, team, department or company doing the OKR can work on directly.  The Key Result must be measurable, and a target set for the Key Result before the OKR is shared, e.g. the sales team will call 1000 leads by the end of Q1.  Once a Key Result is established, its performance can be monitored and, during a review, graded.

Klau recommends scoring Key Results between 0.0 and 1.0, but that is not mandatory; 0% to 100% or A to F are also excellent options.  One can score Key Results using whatever format and style best suits the organisation.

To calculate a score, we divide the actual result by the target.  So if the sales team only managed to call 300 leads and the target on their Key Result was 1000, then they would get a score of 0.3; if they called 700, the team would get 0.7.  Once we have a score, we can then grade the Key Result:

Score        Grade
0.6 to 1.0   Good
0.4 to 0.6   Pass
Below 0.4    Review
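In code, those bands might look something like this (a sketch; the 0.6 boundary is resolved upwards to match the sweet-spot advice below):

```python
def grade(score: float) -> str:
    # Bands from the table above; 0.6 counts as "Good",
    # since 0.6-0.7 is described as the sweet spot.
    if score >= 0.6:
        return "Good"
    if score >= 0.4:
        return "Pass"
    return "Review"

print(grade(300 / 1000))  # Review
print(grade(700 / 1000))  # Good
```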

When setting a target for a Key Result, reaching a score of 0.6-0.7 should feel like a stretch.  If a team/individual is averaging a score of 0.6-0.7 on their Key Results, then they are probably doing an excellent job of taking appropriate risks and pushing themselves.  There will be a few Key Results which they knocked out of the park (0.8 or 0.9) and a few that didn't work (0.2-0.3 or even 0.0).  Unless a Key Result is binary in nature, e.g. launch the website by date X or release product A by close of Q3, getting lots of 1.0s is a smell.

Are people or teams setting Key Result targets low so they can comfortably meet them?  If so, why?  Are Key Results being used for individual performance assessment?  Organisations should not weaponise OKRs.  Use another tool and different criteria to manage performance and keep OKRs as a safe place for individuals and teams to be brave, experiment and fail.

An individual can be very conscientious and part of a high-functioning team, yet both the individual and the group may still fail to hit their OKRs.  For example, a person who put in heaps of extra hours and helped the company through a series of critical incidents last quarter may not have had the time to knock out all their OKRs.

Reviewing OKRs

If you or the team didn't get a good score on a Key Result, don't despair.  Ask:  What went wrong?  Did we have any blockers or issues beyond our control?  How else could we have done it?  What else can we do?  OKRs are like experiments, so failure is a learning opportunity and should be applauded rather than condemned.

To help foster a safe environment, publicly review company and team OKR results quarterly.  Get the owners of OKRs, such as the Head of Engineering, Head of Product and Team Leads, to explain their grades and any adjustments they want to make for next quarter:

  • What grade did we get?
  • Why did we get that grade?
  • What are we going to do differently next quarter, and why?
  • What did we learn?

Determining an OKR's Success

Rick Klau explains that one way to assess a whole OKR is to average the scores of the Key Results that belong to an Objective.  Averaging the Key Results, however, only tells us how well we completed the Key Results for an Objective.  It is entirely plausible to knock all the Key Results out of the park but still fail to move the Objective.

Objectives are measurable outcomes that one wants to influence but can't control directly, e.g. increase signups by 1000 per month, raise revenue by 3% or lower churn to 15%.  Key Results are quantifiable activities we can directly control, which we think may influence the Objective, e.g. reduce page load time to below 2 seconds or resolve 90% of customer enquiries in under 10 minutes.

Therefore, even if you have completed the Key Results of an OKR, you also need to review the Objective.  The Key Results may not have affected the Objective, or they may not have had enough of an impact for the money and effort invested.
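A rough sketch of that second check, assuming we record the Objective's metric at the start and end of the quarter (the field names and thresholds are illustrative, not from Klau's talk):

```python
from dataclasses import dataclass

@dataclass
class Objective:
    description: str
    metric_start: float   # e.g. monthly signups when the quarter began
    metric_target: float  # where we hoped the metric would land
    metric_end: float     # where it actually landed at review time

def objective_progress(obj: Objective) -> float:
    # Fraction of the intended movement we actually achieved.
    intended = obj.metric_target - obj.metric_start
    actual = obj.metric_end - obj.metric_start
    return actual / intended

def review_okr(obj: Objective, kr_average: float) -> str:
    # Good Key Result scores with little Objective movement suggest
    # the Key Results weren't the right levers for this Objective.
    if kr_average >= 0.6 and objective_progress(obj) < 0.3:
        return "Key Results done, but the Objective barely moved: rethink the Key Results"
    return "Key Results and Objective moved together"
```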

If the company has an Objective that spans multiple quarters, like "Increase revenue by X", then one can try different Key Results each quarter and learn what kind of activities help move revenue up and by how much.  Remember that useful Key Results may not be successful all the time.

Some Key Results may be seasonal, e.g. trying to sell ski gear in the middle of summer may not yield great results.  Other Key Results may have reached a point of saturation, e.g. reducing the shopping cart load time from 30 seconds to 2 helped with the conversion rate last quarter, but continuing to cut it to 1 second may not help that much.

Developing Quantifiable Objectives

If a company has a vague Objective like "Improve our company's reputation", how can people reviewing the OKR tell if the company's reputation has improved or not?  Brainstorming "How could we measure our reputation?" may help generate some quantifiable Objectives like "Improve our NPS (Net Promoter Score) from -63 to -45" or "Reduce registered complaints from 2000 a month to 1000."

Having an Objective that clearly states a metric, as well as the current value and a desired target for that metric, makes it easy to tell how we are tracking.  "Looks like this month we will be down to 1200 complaints!"  Remember to always feel free to question an Objective.  If the company hit all the Key Results and saw the NPS rise to -45, but employees agree that when they mention the company's name at parties people still try to spit on them, then review the Objective:

  • Are we using the right metric?  e.g. NPS may not be appropriate for understanding our customers' sentiments.
  • Do we need to be more patient or aggressive with our goal?  e.g. an NPS of -45 is still not great; maybe people will stop avoiding employees at parties when we get NPS to 45.
  • Are we measuring the metric correctly?  e.g. we are only surveying NPS on customers who contact the sales department and not customer service.

Measuring and reviewing both the Key Results and the Objectives can help us answer several questions:

  • How well did we execute the Key Results?
  • Did achieving the Key Results influence the Objective?
  • Did we have a useful/meaningful Objective?

How many OKRs should you have?  You should have 3-5 OKRs and 3-5 Key Results under each OKR.  Remembering that OKRs are reviewed and set every quarter (3 months), this gives you 9 to 25 Key Results to work through.  More OKRs/Key Results would make it difficult to focus, and fewer would probably not be as engaging.

What's the difference between OKR and KPI?  KPIs are a set of basic metrics that help determine the performance of an employee or team.  Think of a dashboard in a car.  OKRs are a tool for trying to translate the strategic goals of the organisation into activities that individuals and groups can undertake.  Think of taking the car on a family holiday.

What is OKR tracking? OKRs are reviewed every quarter (3 months) but should be tracked more often.  Personal OKRs can be tracked at every 1:1 and team OKRs at regular meetings like Retros.  If an issue is found when tracking an OKR, the OKR can be adjusted immediately, before the next review/creation cycle.