Common Mistakes when using OKRs in ML

While OKRs are among the best tools for aligning a company, they must be implemented correctly. Here are a few common mistakes that cause OKRs to fail in machine learning projects. Many of these apply to OKRs in general, but some failure modes hit ML teams especially hard.

Misaligning Talented Employees
Misalignment happens when teams or employees are not working on projects that move the needle on OKRs. Highly trained professionals spend years developing technical skills and knowledge, and they do not want to spend that expertise on ad-hoc requests from leadership that are only loosely related to data science. This happens surprisingly often, and it is one of the top reasons data scientists leave organizations.

Ideally, companies want data scientists and machine learning engineers focused on solving a core set of data problems, while analysts handle technical work in an engagement model that supports customers directly. You don’t want a specialized expert providing answers without context; you want their time spent on the core tasks that contribute to the KPIs.

Setting Unreasonable Objectives
Leadership should be careful to set ambitious but reasonable objectives. This is particularly true with cutting-edge technology like machine learning. Buzzwords attract customers and investors, so business owners want to capitalize on them, and they may set bold objectives for machine learning projects that are impractical or outright unachievable.

To combat this, leadership should develop a reasonable understanding of the capabilities and limitations of machine learning. Rather than focusing on “having ML in the organization”, they should ask which aspect of their product or company they want to optimize. They should then define objectives with ML managers, who in turn define metrics with key contributors.

A common benchmark for success is achieving about 70% of an OKR. Objectives should aim a little higher than what seems achievable today, but not by too much. Keep in mind that ML is a process of exploration and outcomes are not guaranteed: you may not be able to set accurate deadlines, but you can still define KPIs that depend on the outcomes of ML projects.

For example: “Improve our ML recommendation engine until it achieves a 90% customer satisfaction score.”
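To make the 70% benchmark concrete, here is a minimal sketch of how OKR grading is often computed, assuming the common convention of scoring each key result linearly against its target and averaging (the function names and example numbers are illustrative, not from any specific OKR framework):

```python
# Hypothetical OKR grading sketch: each key result is scored as actual
# progress divided by target, capped at 1.0, and the objective grade is
# the average of its key-result scores. ~0.7 is the common benchmark.

def grade_key_result(actual: float, target: float) -> float:
    """Linear score for one key result, capped at 1.0."""
    return min(actual / target, 1.0)

def grade_objective(key_results: list[tuple[float, float]]) -> float:
    """Average the scores of all (actual, target) key results."""
    scores = [grade_key_result(a, t) for a, t in key_results]
    return sum(scores) / len(scores)

# Illustrative example: a recommendation-engine objective with two key
# results -- customer satisfaction reached 81% of a 90% target, and
# click-through rate reached 4.2% of a 5% target.
grade = grade_objective([(81, 90), (4.2, 5.0)])
print(f"objective grade: {grade:.2f}")  # 0.87 -> above the 0.7 benchmark
```

A grade near 1.0 across several quarters usually means the objectives were not ambitious enough; consistently landing around 0.7 is the signal that targets were set at the right stretch.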

Doing “Business As Usual”
OKRs should not be used for “business as usual” either. This includes maintenance and recurring work: risk mitigation, administrative tasks, hiring procedures, server monitoring. While you can still use KPIs to stay above or below desired performance levels, reserve OKRs for implementing big changes or pursuing bold objectives.

The good news: machine learning is one of the few tools that can redefine “business as usual”. ML models can find patterns in large amounts of data and automate decision-making, and when done right this can have a profound impact on KPIs. It forces leadership to think about the most impactful areas of the business.

Lack Of Transparency
Defining and pursuing OKRs should be a company-wide effort, yet machine learning is commonly structured as a centralized team. Management should ensure that everyone has access to the OKRs and the current status of the KPIs, and that both are regularly reviewed and updated. We recommend reviewing KPIs weekly and objectives quarterly. This allows large organizations with isolated teams to stay flexible in their planning and update objectives as market conditions change.

Machine learning teams can also suffer from a lack of transparency due to distributed platforms and high turnover, which makes it harder to track progress and reuse working models. A documentation tool helps here: keeping a track record of ML experiments makes it easier to define future KPIs by analyzing what worked and what failed in the past.
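The track-record idea can be sketched as a lightweight experiment log. The field names below are illustrative, not any particular tool’s schema; the point is simply that a searchable record of past runs keeps outcomes available when defining the next quarter’s KPIs:

```python
# Hypothetical experiment-log sketch: record each ML run with its
# metrics and notes, then query the history when setting future KPIs.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExperimentRecord:
    model_name: str
    dataset: str
    metrics: dict                      # e.g. {"csat": 0.81}
    notes: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ExperimentLog:
    def __init__(self) -> None:
        self._records: list[ExperimentRecord] = []

    def record(self, rec: ExperimentRecord) -> None:
        self._records.append(rec)

    def best(self, metric: str) -> ExperimentRecord:
        """Return the past run with the highest value for a metric."""
        return max(
            (r for r in self._records if metric in r.metrics),
            key=lambda r: r.metrics[metric],
        )

log = ExperimentLog()
log.record(ExperimentRecord("reco-v1", "clicks-2022-q2", {"csat": 0.78}))
log.record(ExperimentRecord("reco-v2", "clicks-2022-q2", {"csat": 0.84},
                            notes="added session features"))
print(log.best("csat").model_name)  # reco-v2
```

Even a minimal log like this survives team turnover: a new hire can query what has already been tried instead of rediscovering it.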

How Vectice Can Help
Vectice captures the most valuable assets of AI/ML projects and stores them in one central place. Many ML assets (datasets, code, notebooks, models, runs) are often created across multiple teams. Vectice safeguards these assets and provides access to the right people at the greatest moment of impact. Outcomes are memorialized, annotated, searchable and reusable.

In the last of our 4-part series, we’ll discuss morals, ethics and safety for OKRs in ML projects.

August 18, 2022