- The Product-Led Geek
We're going on a bear hunt...
An essay on data and decisions
Product person at a startup: "How can we make good product decisions without data?"
You need data to make good decisions.
A decision without data is just a guess.
You're probably familiar with the DIKW pyramid.
Data. Information. Knowledge. Wisdom.
I like to invert the DIKW pyramid and think about decision making as a funnel.
We feed data in and a decision comes out at the other end.
But there's a lot happening in the middle of all of that.
Data is the top of our decision making funnel. It's the raw input. The atomic signals.
We can't do much with the data on its own.
So we organise that data, structure it, classify it, and create relationships between different pieces, so it becomes useful information with context.
Information typically allows us to answer the "who?", "what?", "where?" and "when?" questions.
We analyse that information, perform comparisons, synthesise ideas, form concepts and interpretations to build knowledge with meaning.
With knowledge, we can start to answer questions like "how?" and "why?".
And we evaluate the knowledge we have about something to generate insights that lead to wisdom with understanding.
It's wisdom that enables us to make timely, well-informed decisions that improve the likelihood of more predictable outcomes.
Of course we do all of this without conscious awareness that our brains are applying this framework.
And it's not a linear, siloed process for each decision either.
Data builds up over time, and we're gradually converting it to information, knowledge and eventually, wisdom.
And none of it is binary. It's not that we're ever completely wise or unwise about something.
We just have either increased or decreased confidence in making decisions.
But all of this underpins every product and growth decision we make.
An example
Here's a simple example from a B2B sales commission product.
Data: We have raw data including behavioural analytics, error logs, firmographic data, support case data, and customer feedback including transcripts of user interviews. In isolation, each data point provides a siloed signal.
Information: We structure and organise the data. It's still in different tools, but we can start to manually correlate support cases, error logs, usage analytics and customer feedback for any given customer. We also start to observe patterns in specific signal sources. Error logs highlight some issues with syncing data from the CRM. Customer feedback consistently notes frustration about a feature for predictive payout calculations. Analytics data shows that the feature is used less frequently than expected.
Knowledge: We synthesise and analyse this information and learn that the sync error only affects companies with more than 500 CRM opportunities, and only those using SFDC. But it's a silent, hidden issue. Opportunities are being synced, but with some lossiness. Negative feedback and reduced usage of the payout prediction feature tell us that it's not meeting user needs. User interviews further reveal that reps find the feature confusing, and amongst teams on the free plan, it's cited as a reason for not upgrading to the paid plan.
Wisdom: We evaluate the knowledge we have and learn that the two issues are related. We gain the insight that the dissatisfaction only appears amongst customers experiencing the (silent) sync error: it's creating inconsistencies in the predictive payout UX that confuse users. This segment of customers has significant overlap with our ICP.
We can now more confidently decide how to approach the problems we first observed in the raw data.
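As a toy sketch of that final synthesis step (the customer names and signals below are entirely hypothetical), the "wisdom" insight is really just a relationship between signals that were previously siloed in different tools:

```python
# Hypothetical per-customer signals pulled from different tools.
sync_errors = {"acme", "globex", "initech"}   # customers hitting the silent CRM sync error
unhappy_payout = {"acme", "globex"}           # customers with negative payout-feature feedback
icp = {"acme", "globex", "hooli"}             # customers matching our ICP

# Insight: dissatisfaction only appears amongst sync-error customers...
print(unhappy_payout <= sync_errors)          # True: every unhappy customer has the sync error

# ...and that segment overlaps heavily with our ICP.
print(sorted(unhappy_payout & icp))           # ['acme', 'globex']
```

Trivial once the data is joined by customer, but impossible to see while each signal lives in its own tool.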
Where we run into problems
The problems arise when we accept insufficient data, or when we are lazy in analysing and processing the data.
In either case we need to make more, or larger assumptions across the decision making funnel.
With more data available, we can arrive at a position of confidence in our decision making more quickly.
With less data available, we need to fill the gaps with assumptions.
If our analysis is not thorough, we'll miss connections, correlations and patterns that again leave gaps in the information and knowledge we have, and force us to make assumptions to fill those gaps.
The more assumptions we have, or the bigger the gaps we fill with assumptions, the less confident we can be in our understanding of a given situation, and the less confident we can be in the decisions we make based on that understanding.
But trying to chase absolute confidence is like chasing your tail.
Note: The same is true in formal experimentation, of course - we accept some margin of potential error through a sub-100% confidence level so that we don't have to run our experiments in perpetuity.
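To put a rough number on that tradeoff, here's a sketch using the standard two-proportion sample size approximation (the baseline rate and lift are made-up; `power` defaults to the conventional 80%):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, confidence, power=0.8):
    """Approximate users needed per variant to detect an absolute lift
    of `mde` on a baseline conversion rate, two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 * baseline * (1 - baseline) / mde ** 2

# Detecting a 2-point lift on a 10% baseline:
for conf in (0.90, 0.95, 0.99):
    n = sample_size_per_variant(0.10, 0.02, conf)
    print(f"{conf:.0%} confidence -> ~{n:,.0f} users per variant")
```

Each extra notch of confidence costs disproportionately more traffic - which is exactly why chasing absolute confidence means running the experiment in perpetuity.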
If we think about a new area where we have no existing data, we can plot this as an S-Curve:
The curve is slow to begin with because there's a critical mass of learning we need to do to get beyond the "guessing" stage.
Then itâs a fairly linear ramp before we start to see diminishing returns.
The good news is, as we build a body of data, information, knowledge and wisdom in a given domain, we move up the curve, and subsequent similar decisions can be made more quickly AND with higher confidence.
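One way to picture that S-curve (purely illustrative - the midpoint and steepness values are arbitrary) is a logistic function mapping accumulated signals to decision confidence:

```python
import math

def confidence(signals, midpoint=50, steepness=0.1):
    """Illustrative S-curve: confidence grows slowly at first,
    ramps roughly linearly, then hits diminishing returns."""
    return 1 / (1 + math.exp(-steepness * (signals - midpoint)))

for n in (0, 25, 50, 75, 100):
    print(f"{n:>3} signals -> confidence {confidence(n):.2f}")
```

The flat start is the "guessing" stage; the flattening at the top is the diminishing returns.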
Bad data in, bad decisions out
We can also run into problems when we have bad data.
If we feed our decision making process with bad data, weâre much more likely to make bad decisions that lead to bad (or at least indifferent) outcomes.
That's why we need to take care to avoid bias in our research, and take steps such as ensuring our analytics instrumentation conforms to the schema we design.
The consequence of bad data in product and growth decisions can be huge.
The bear hunt
Going back to the question that inspired this post.
Product person at a startup: "How can we make good product decisions without data?"
Me: "You have no data? Really?"
Product person at a startup: "We're early. We don't have many users yet."
Me: "So?"
Letâs be clear.
There is never a situation where you have no data.
In product and growth, the word "data" is most commonly equated with quantitative product analytics data.
But "data" really just means a body of facts or information that we can apply in reasoning, discussion and decision making.
Beyond quantitative product analytics and A/B test data, there are many sources of data that we can tap into to fuel our decisions.
Founder & sales conversations with potential customers
User interviews
Observational studies
Product feedback or RFEs
Support cases
Competitive analysis
Surveys
Social media
Win/loss analysis
You'll always be able to make an excuse for why you can't get data.
You'll always be able to find an obstacle that justifies inaction.
But hereâs the thing.
Product and growth operators need to be experts in acknowledging the obstacle, but then finding ways over it, under it, or through it.
This is the bear hunt.
At the same time, product and growth isn't just a science.
It's equal parts art.
It's reasoning.
It's intuiting.
It's taking bets.
The less confidence we have, the bigger the bet.
And sometimes that's exactly what's needed.
The sweet spot moves along the curve.
You can keep going on the bear hunt, but creating wisdom and increasing confidence inevitably has the tradeoff of taking more time.
Balance confidence with decision making agility.
A big part of the art is in knowing for any context, where the sweet spot lies.
Until next time!