5 Steps To Consider Before Launching A Feature

By Aatir Abdul Rauf

Published Sep 26, 2022

Q: "I just launched a feature & customers started complaining. Should I persist with it or just roll back?"

Consider this:

When you're ill & your doctor prescribes medicine, they sometimes say, "symptoms will get worse before they get better."

They also predict recovery in X days & the way to measure that is to check your "fever" regularly.

BUT if symptoms persist, they ask you to come back for a visit.

The same treatment playbook applies to tech products.

Look. Customers use your product to derive value from it, but beyond the price tag, they also incur a learning curve to figure out how to use it.

When they finally overcome that, they settle into their ways. If you throw a curveball in there & force them to re-learn, they will make some noise. It's natural.

But...

What if your update was really an oversight? Something truly counter-productive? Sticking with it for too long can be damaging too.

Here are some steps to consider:

1. Set a hypothesis before you launch a feature

The hypothesis should typically state:

(a) how much the value metric will rise/drop (i.e. the fever) and

(b) in what timeframe

2. Know the benchmark adoption of your feature on a daily, weekly & monthly basis

How often is it used & by what % of your user base?

3. Create avenues to collect feedback from your user base for this specific feature

4. Consider your options

Once a significant majority of your user base has tried the feature & the value metric continues to diverge from the hypothesis you set in step 1, you need to assess your options. (One way to frame that check is sketched in code after step 5.)

5. Before you call a code red, review the user feedback & ensure there aren't blocking bugs involved

Also, explore possibilities to improve the feature in a quick iteration before surrendering to a revert.
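
To make steps 1, 2 & 4 concrete, here's a rough sketch of how you might encode a launch hypothesis & check actuals against it. It's a minimal illustration, not a prescribed framework: the metric name, thresholds & numbers are all made-up assumptions.

```python
from dataclasses import dataclass

@dataclass
class LaunchHypothesis:
    """Step 1: what should move, by how much, and by when."""
    value_metric: str           # e.g. "listing_conversion_rate" (made up)
    expected_change_pct: float  # expected rise (+) or drop (-) after launch
    dip_tolerance_pct: float    # how much "worse before better" we accept
    window_days: int            # the timeframe to hit the expected change

def adoption_pct(feature_users: int, active_users: int) -> float:
    """Step 2: % of the active base that used the feature in a period."""
    return 100.0 * feature_users / max(active_users, 1)

def assess(h: LaunchHypothesis, days_live: int,
           observed_change_pct: float, adoption: float) -> str:
    """Step 4: hold, iterate, or consider a revert."""
    if days_live < h.window_days:
        # Still inside the predicted recovery window: tolerate the dip.
        if observed_change_pct >= -h.dip_tolerance_pct:
            return "hold: dip is within tolerance, let it play out"
        return "iterate: dip exceeds tolerance, check for blocking bugs (step 5)"
    if adoption < 50:
        return "hold: too few users have tried it to judge the metric"
    if observed_change_pct >= h.expected_change_pct:
        return "keep: hypothesis met"
    return "consider revert: window elapsed & metric still diverging"

# Example: expect conversion to rise 5% within 21 days, tolerating a 3% dip.
h = LaunchHypothesis("listing_conversion_rate", 5.0, 3.0, 21)
print(assess(h, days_live=25, observed_change_pct=-4.2,
             adoption=adoption_pct(6_200, 10_000)))
```

The point isn't the exact thresholds; it's that writing the hypothesis down before launch gives you an objective trigger for step 4, instead of reacting to whoever complains loudest.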

Example:

At Pakwheels, we once rolled out an update that changed a single-page form into a three-part wizard in an attempt to bump up conversion rates. I got a LOT of hate for that change & our power users pressured us to roll back. And I did, 3 weeks later.

But I was naive in those days. Given the chance, I would have:

1. Focused more on the problem.

Was the form length really an issue? User interviews maybe?

2. Documented our pre-release trends to assess our decline objectively.

How badly were we suffering?

3. Conducted a focus group of sorts to get early feedback.

4. Crafted a messaging/onboarding layer to explain the change a bit better.

5. Potentially designed an A/B test for this.

The challenge with an A/B test though is that it:

  • Becomes expensive when variants depart significantly from the control
  • Only works when you have abundant data points
  • Only really tells you something when you reach 95% statistical confidence (i.e. your variant irrefutably annihilates the control or vice versa)

For these reasons, many A/B tests turn out to be inconclusive.
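
For context on that 95% bar: one common way to check it in a two-variant conversion test is a two-proportion z-test. Here's a minimal sketch using only Python's standard library; the traffic & conversion numbers are made up.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value comparing conversion rates of arms A & B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-CDF tail probability, doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up numbers: control converts 400/10,000; wizard variant 430/10,000.
p = two_proportion_z_test(400, 10_000, 430, 10_000)
print(f"p-value = {p:.3f}")  # ~0.29, i.e. well short of 95% confidence
```

Note how even a 7.5% relative lift on 10,000 users per arm doesn't clear the bar here, which is exactly why under-powered tests so often come back inconclusive.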

Develop only to solve a problem. Revert only with reason.
