Concierge vs. Wizard of Oz Test

There are two lean startup methods that are often confused with one another and, as a result, often misused: the Concierge test and the Wizard of Oz test.


tl;dr: A Wizard of Oz test is for testing a specific solution hypothesis; a Concierge test is for generating ideas. Use each for the right reasons. You can read more by downloading the Real Startup Book, which is being published in two-week iterations.


Wizard of Oz Test

Wizard of Oz test - Pay no attention to the man behind the curtain

In the lean startup world, the Wizard of Oz method was used by companies like Aardvark and CardMunch. Both were able to create amazingly effective prototypes very quickly by not building anything.

Aardvark connected people with questions to people with expertise. CardMunch managed to transcribe blurry photos of business cards better than any other Optical Character Recognition (OCR) system at the time. Neither had any technology or algorithm behind them.

The technology was human beings.

CardMunch leveraged Amazon’s Mechanical Turk, while Aardvark used interns. Instead of building complex algorithms, both just faked it. Since neither needed a real-time response to provide its core value proposition, there was no need to build anything.

To the user, the products delivered exactly what they promised. A little slow perhaps…but the value was there.

(Note: Lean Startup did not invent this technique. It was already being used in the lab by 1975 by J. F. Kelley.)

Concierge Test

Concierge testing was used by companies such as Wealthfront and Food on the Table. The approach is similar in that there is no technology involved. Both companies performed the service they were offering the way a concierge at a hotel might: purely manually.

Wealthfront would work with its customers to provide wealth management advice and portfolio management. They would sit with them, pen and paper in hand, and figure out solutions to wealth management issues step by step.

Food on the Table’s Manuel Russo used to personally go shopping with his customers to help them plan their meals. (Not very scalable!)

So both Wizard of Oz and Concierge testing provide the value proposition manually without any technology. What’s the difference?

User Experience

The difference is that in one case, the user knows about the concierge, while the Wizard of Oz is hidden from the user. This is a key difference for the user experience.

In a Wizard of Oz test, the user experience is a complete simulation of the actual intended product, but with a slight time delay, since a human cannot react as fast as a computer. The delivered value proposition is almost identical but slightly underperforming.

In a Concierge test, the value proposition is actually significantly higher than the eventual product. That’s because the customer is aware and is interacting with a real human being.

Humans are Social Creatures

Human interaction, as a whole, is generally a positive value proposition. We’re social creatures, and even introverts like me gather in groups to socialize.

There are some circumstances where we do not want to interact with a human being. Personally, I don’t want to talk to anyone at Macy’s. Malls and department stores just kind of freak me out in general.

For everyone else, imagine you’ve just typed a search into WebMD to do a little private research, and all of a sudden a window pops up with a live human being on a webcam telling you all about your search query for genital herpes. Fun!

Compare that to Food on the Table where we’d get to go shopping with Manuel Russo. Normally, I find shopping a bit of a chore, but Manuel is a pretty fun guy. I was delighted to meet him a few years ago in Austin and I would be equally delighted to go shopping with him. (Manuel seems to know all the best places to get food in the city and is hysterically funny.)

A Concierge test has an extra value proposition that a Wizard of Oz test does not. Or at the very least, the test is severely affected by the visible presence of a human being.

Generative vs. Evaluative

This brings us back to the distinction of Generative Research vs. Evaluative Experiments.


Wizard of Oz testing allows us to test a clearly defined hypothesis about which solution will provide the value proposition to the customer. We test it by simulating the user experience as precisely as possible without technology. The lack of technology means we can rapidly test a variety of hypotheses and rapidly iterate.

Since Wizard of Oz allows us to test and falsify a solution hypothesis, it’s an Evaluative Product Experiment.

Concierge testing has a huge human factor which biases the test. If we attempt to use a Concierge test to validate our solution hypothesis, we’re liable to get a bias towards a false positive, or perhaps even a false negative in some circumstances. It’s simply the wrong type of test for falsifying a clear hypothesis.

Concierge testing is very bad when we have a clear idea of the solution, but very good when we’re not exactly sure what the solution should be!

Generating Ideas

When Wealthfront sat down with their customers, they had some idea of how to manage finances, but they didn’t have a step 1, 2, 3 algorithm for doing it.

There were many unknowns, including how best to represent portfolio information to customers in a way intelligible enough for them to make their own decisions.

Concierge testing allows us to work with the customer without having a clear solution hypothesis. A vague idea, or even no idea, will sometimes be enough to muddle through the process while talking with the customer and getting constant feedback.

We might think that it’s impossible to go to a customer and just sell the promise of a solution without having any clear idea of how to provide it. It’s not.

Selling solutions that don’t exist and then winging it is commonplace. It’s called Consulting.

(Note: Lean Startup didn’t come up with Concierge testing either. Consulting has been around since before the pharaohs.)


By the time we’ve manually performed the task a few times for our consulting clients, we have a much clearer idea of what form an automated solution can take and what the minimum viable feature set looks like.

The solution is not yet validated, but we can take that hypothesis and begin the process of evaluative experimentation with a Wizard of Oz test, paper prototyping, usability tests, etc.

Lessons Learned

For more ways to test your solution or figure out your minimum viable feature set, download the Real Startup Book which is being co-created by 40+ lean startup enthusiasts:


So…what should I post next? Tweet to tell me what to write:

Show me how to test product market fit!

or

How can I do lean startup in my friggin' huge company?

16 comments

  1. Fantastic comparison and contrast of the concierge versus “Wizard of Oz” MVP. As you noted, the concierge MVP is great for generating ideas in the face of uncertainty about the solution. I would go a step further: as a qualitative research tool, it’s also great for better understanding market problems and for uncovering “new” ones.

    • Tristan says:

      Hmm…feature level problems?

      In terms of big, main value proposition problems, I think we need to have that down before we can attempt concierge testing. Once you have that, narrowing down to specific issues we might be able to turn into unique selling points or differentiators…hell yes.

      • I think it’s a mistake to assume you “have the unique value proposition down” or the right set of market problems “down” before running a concierge experiment. You may have conducted prior customer discovery and experiments that give you some level of confidence in those hypotheses, but they likely aren’t proven or complete.

        And, in fact, a concierge experiment is one of the most powerful ways to immerse yourself in the customer and the problem space, and thus uncover unknown unknowns. As valuable as I think interviews are, nothing beats hands-on, active engagement.

        • Tristan says:

          I’m pretty sure we just agreed.

          Must have a general/main pain point before running concierge. Then running concierge can narrow down the potential solutions and help generate specific solution hypotheses which are then the UVPs.

          Can’t run concierge without having some idea of the general pain point, let alone convince someone to give you the chance of solving it.

          • While we agree that you’ll want to have some hypotheses about pain points going into a concierge experiment, we differ (at least in emphasis) in two ways:

            1. It’s NOT just about discovering solutions, it’s arguably more about discovering problems.
            2. To support your contention that a concierge experiment’s utility is in discovering solutions (not problems), you stated that you need to have the unique value proposition “down”. Yet the problems you discover during a concierge experiment may in fact be transformative and cause a major pivot, not just a tweak to “specific issues” and granular differentiators.

            These are key points. Too often, people think they’ve “validated” their pain point and unique value proposition hypotheses, when in fact they haven’t fully explored the unknown unknowns. A concierge experiment is an excellent way to take that exploration to a level beyond prospect interviews.

          • Tristan says:

            This is strange, can you email me where you think it says that I think the UVP should be absolutely locked down and not changed? If it’s in the post that should be corrected.

            I only think you should have a customer segment, a pain point, a general value prop, and some demonstration of demand, not a UVP.

            e.g. In Wealthfront’s case, I would want to have:
            – a clear customer segment
            – evidence that the customer segment has a problem managing their wealth
            – the value proposition for that segment that will solve that general pain point
            – evidence that the customer will “pay” for that service by committing money and/or time to do the concierge test

            Of course, it could turn out that everything we thought going into the test is wrong and we wind up going straight back to customer discovery. But that’s the case at any stage of testing with any type of test.

            We could also discover or refine an interesting UVP when running the concierge test. e.g. “Wow…people don’t have a problem coming up with recipes for everything. They’re fine figuring out the main course, but they have a lot of frustration coming up with different side dishes.”

            I don’t think it’s possible to run a concierge test without having some basic idea of what the pain point is. I don’t understand how you could possibly do a concierge test without knowing at least what pain point you’re attempting to solve.

            If just hanging out and observing until finding a pain point, it wouldn’t be a concierge test. It would be ethnography.

          • There are several disconnects here, Tristan.

            First, I hope you acknowledge that your blog entry repeatedly referred to using the concierge test to explore the solution space, and nowhere did it mention using it to explore the problem space. My original comment merely pointed out that a concierge MVP can work equally well – if not better – for better understanding market problems and uncovering previously undiscovered ones. You should at least admit that your original treatment of the concierge MVP was almost exclusively focused on the solution, and that this near exclusive focus omits an important role that a concierge MVP can play in problem discovery.

            Second, you asked, “can you email me where you think it says that I think the UVP should be absolutely locked down and not changed?” I didn’t make that strong a claim. In a comment, you wrote “In terms of big, main value propositions problems I think we need to have that down before we can attempt concierge testing.” That statement (and particularly the word “down”) implies more than having a hypothesis regarding “main value propositions problems” as a prerequisite for a concierge test. When you combine it with your blog entry’s intense focus on using the concierge test to explore solution space, it implies either a great deal of confidence in the problem space, a belief that a concierge test is not notably useful for testing and exploring the problem space, or both.

            Third, you seem to believe in a strange distinction between a “value proposition” and a “unique value proposition”. You seem to think the UVP comes from the solution, so you come into solution exploration with a (non-unique) value proposition and form a unique one as you solidify a solution. Please correct me and explain the distinction as you see it. (BTW, it’s another topic, but I don’t believe the uniqueness of a compelling UVP should be rooted in the solution. It should be rooted in the problems.)

            Fourth, you retreated from your original statement that you should go into a concierge MVP having the problems you wish to solve “down” – which implies a great deal of confidence – versus going into a concierge MVP with “some basic idea of the pain point”. Nowhere did I dispute that you should go into a concierge MVP with “some basic idea of the pain point”. I agree with this softer formulation. (Moreover, despite what some design thinking proponents may seem to claim, even ethnography can be very useful when you already have some hypotheses regarding the problem.)

            Finally, you implied that a concierge MVP is no better a tool than any other for discovering problems that might cause a major shift in the unique value proposition: “But that’s the case at any stage of testing with any type of test.” Again, I see a concierge MVP as an excellent way of uncovering unknown unknowns. Many tests – since they are quantitative or “evaluative” in nature – are not nearly as effective for uncovering market problems.

      • Natalia says:

        Tristan, thank you for your post! A lot of what you say is so true. Would you mind if I translate your post into Russian and post it somewhere else, with a reference, of course?

        • Tristan says:

          Please feel free!

  2. Hey Tristan, I just stumbled onto your blog. Great post! I recently took over organizing duties for the Boston Lean Startup Circle, so I was looking for content to share.

    My team and I most recently used the Wizard of Oz test 3 months ago to validate assumptions for our latest venture, now at 30 clients and 2-3K in monthly revenue.

    Looking forward to future posts!

    • Tristan says:

      Thanks Dennis! I appreciate it. Good luck with Boston LSC…haven’t been over there yet.

  3. Jinder says:

    Great post, thoroughly enjoyed and agreed reading that. Had to recently explain to a number of colleagues this very difference and decided it easier to point them here, thanks!

    • Tristan says:

      Thank you very much! Most of the time when I write these articles it’s so I can also send people a link instead of explaining it. 🙂

