Jay Badenhope (Senior Product Manager at JustAnswer) leads product teams that experiment and build things to solve meaningful customer problems. I reached out to him to chat about his approach to implementing lean startup in large and mid-size companies. You can find him on Twitter or LinkedIn.
Q: What is the difference between running lean in a very large vs. a mid-sized company? How have you adapted for corporate startups?
A: In my experience, there is a difference but it’s not night and day. I’ve led lean product teams at Intuit and JustAnswer. Both companies have established products and business models for making money from selling those products. To paraphrase Steve Blank, a company is responsible for executing on a proven business model, unlike a startup that is seeking to establish a new business model. I’ve led product teams that were responsible for testing ideas for new business models within these two profitable companies. The main difference I’ve seen is that larger companies tend to be more conservative in supporting the existing business model and smaller companies may be hungrier to try new things.
In terms of adapting to different environments, a question I’m thinking about now is how to keep a product team and executives aligned as we get deeper into testing new business models. Starting has felt relatively easy, because folks are filled with optimism, have a vision that may feel clear, and are patient to trust a process of rapid experimentation.
It’s a honeymoon period. Seeing hypotheses validated or even invalidated feels like progress. But what do you do if 6-9 months into the new product journey you are learning, have ruled out some ideas, but don’t have data pointing to a big winner and management is starting to get impatient for revenue?
That’s a tough spot to be in as a product manager, and I know other product managers who have been there, too. I’ve had some success framing goals with executives that highlight what we’ve learned and where it suggests we go next, but I’m also looking for better ways to set realistic expectations from the beginning.
Q: How should lean be applied (and managed) across the three horizons of growth? In other words, is “running lean” different when we’re working on a core product versus a horizon two or three product?
A: It’s especially important to choose the right metric for the situation. The most compelling metric for executives in established companies is often revenue, but that’s not always the right metric in all three horizons.
In theory, the metrics should line up something like this:
- Horizon one product teams are managing profit growth for a product with proven revenue.
- Horizon two product teams are managing revenue growth with proven product-market fit.
- Horizon three product teams are validating product-market fit for a new idea.
In reality, I’ve seen horizon three product teams (and other experimental efforts like social media marketing teams) have revenue goals before they’ve proven product-market fit with a customer who is willing and able to pay for a solution to a poorly met need. It’s not hard to understand why. Executives overseeing both mature and early products spend most of their time on the mature products, so it’s more familiar and consistent for them to apply a mature product metric like revenue to all their teams.
A harder but more useful way to measure horizon three product results is to choose metrics that predict growth, address the riskiest unproven assumptions in the idea, and give product teams clarity on where to set the bar when running their first experiments. The book Lean Analytics does a great job advocating for metrics that are ratios.
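To illustrate why ratio metrics beat raw counts for early products, here is a minimal sketch (the metric names and numbers are hypothetical, not from JustAnswer or Lean Analytics): an activation rate normalizes for traffic volume in a way a raw count never can.

```python
# Sketch: a raw count metric vs. a ratio metric across two weeks.
# All names and numbers are invented, for illustration only.

def activation_rate(activated: int, signups: int) -> float:
    """Ratio metric: fraction of new signups who activated."""
    return activated / signups if signups else 0.0

# Week 1: heavy paid traffic; Week 2: lighter but better-qualified traffic.
week1 = {"signups": 1000, "activated": 80}   # 8% activation
week2 = {"signups": 400,  "activated": 60}   # 15% activation

# By raw count, week 1 looks better (80 > 60)...
assert week1["activated"] > week2["activated"]

# ...but the ratio shows week 2's funnel converted far better,
# which is the kind of signal that predicts growth.
assert activation_rate(week2["activated"], week2["signups"]) > \
       activation_rate(week1["activated"], week1["signups"])
```

The same logic applies to any funnel step: a ratio tells you whether the product is getting better, while a count mostly tells you how much traffic you bought.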
Another challenge is using the right timeframe. Established companies usually work with 12-month plans. I’ve seen that work well for mature products in stable markets, such as when I was working on credit products at Wells Fargo. There’s so much change on startup projects, however, that their plans become outdated after 3-6 months.
In my current work at JustAnswer, I’m excited to see us use quarterly goals going into 2015. We’re using the Objectives and Key Results (OKR) approach, with goals negotiated between managers and teams and quarterly check-ins. (Here’s a video on how Google sets OKRs.) I expect we’ll communicate better and react more quickly to what we learn.
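As a rough sketch of how quarterly OKR grading can work (the objective, key results, and 0.0–1.0 scoring below are invented examples loosely modeled on the Google-style approach the interview mentions, not JustAnswer's actual goals):

```python
# Hypothetical OKR record with simple quarterly scoring.
# Each key result is graded 0.0-1.0 by progress toward its target.

okr = {
    "quarter": "2015-Q1",
    "objective": "Validate a new business model idea",
    "key_results": [
        {"name": "Run 12 experiments with falsifiable hypotheses",
         "target": 12, "actual": 10},
        {"name": "Interview 30 customers about the core problem",
         "target": 30, "actual": 33},
    ],
}

def score(kr: dict) -> float:
    # Cap at 1.0: overshooting a key result earns no extra credit.
    return min(kr["actual"] / kr["target"], 1.0)

scores = [score(kr) for kr in okr["key_results"]]
overall = sum(scores) / len(scores)
print(f"{okr['quarter']}: {overall:.2f}")  # prints "2015-Q1: 0.92"
```

A score well below 1.0 is expected under this style of grading; the quarterly check-in is where managers and teams renegotiate targets based on what was learned.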
Q: What should be taken into consideration when managing timeframes for lean startup projects?
A: I’ll address that first in terms of horizon three projects. Innovation would be easier if we could put breakthrough discoveries on our calendars. It sounds silly to put it that way, but lean startup projects get into trouble when expectations are missed. Those expectations grow as time passes. Here are some questions that can help make executives’ and product teams’ assumptions explicit and addressable:
- Which customer do we most want to serve? Which of their problems do we most want to address? Which solution are we most excited to test with these customers?
- What are all of our unproven assumptions behind this business idea? Which assumptions will we make time to test?
- If our initial assumptions are proven incorrect, is there a minimum or maximum amount of time we want to invest in the idea before moving on to something else? Would we rather give up too early or too late on an idea?
- For the business model we will test, which elements (customer type, value proposition, marketing channel, etc.) are most important to hold constant and which are we more willing to change based on what we learn?
- How will we feel in 3, 6, or 12 months if we learn a lot but don’t yet have a proven business model we want to scale? What can we do to increase our chances of feeling good about this work regardless of the customer behavior we discover?
- If we validate our riskiest assumptions with lean experiments, what do we imagine a real product might look like? What resources will we commit to build the real product?
I’ve used some of these questions with executives and, in hindsight, wished I’d asked others.
Q: With 2014 wrapped up, what achievements did your product teams make? What would you like to focus on in 2015?
A: At JustAnswer, we wanted to get back to data-driven decision making after investing over a year in a big project that fell far short of our expectations. In 2014, every product team had a goal to present one validated learning per week at a company-wide meeting. We defined a validated learning as an experiment with a falsifiable hypothesis. This goal applied to teams working on established as well as new products, so it gave us a way to track the pace of experimentation by product teams working on all three horizons.
We finished 2014 with most teams presenting a validated learning each week.
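To make "an experiment with a falsifiable hypothesis" concrete, here is a minimal sketch of how such a hypothesis might be tested; the conversion numbers are invented, and a one-sided two-proportion z-test is just one common way to decide whether the data falsify the hypothesis.

```python
import math

# Hypothetical falsifiable hypothesis: "Variant B converts better than A."
# The experiment either supports it or falsifies it at a chosen
# significance level. All numbers below are made up for illustration.

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided z statistic for testing p_b > p_a."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value_one_sided(z: float) -> float:
    # P(Z > z) for a standard normal, via the error function.
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

z = two_proportion_z(conv_a=40, n_a=1000, conv_b=65, n_b=1000)
print(p_value_one_sided(z) < 0.05)  # True: the hypothesis survives this test
```

The key discipline is stating the hypothesis and the bar for success before looking at the results, so a miss counts as a learning rather than a surprise.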
Within my team, we used this goal to help us make decisions about our work. I liked that we usually chose options that took less time and got us results more quickly, which let us complete over 40 experiments in 2014. Most of those experiments tested potential solutions. One way I’d like us to improve is to put more value on gaining customer empathy and testing assumptions about customer problems. Most importantly, our team helped lead the way to JustAnswer becoming a more data-driven company going into 2015.