Founders of a scale-up technology business run the risk of building a product that no one needs or wants if they overlook the importance of understanding and capturing user needs.
Accept that, in the beginning, you know the least. Identify what it is that you need to know, or don’t know, and then go and find out.
Nurturing a user-focused culture will enable you to create a successful MVP, and then scale it well, prioritising the features that your users want and will pay for.
Here are some tips from Founders of South West tech scale-up businesses, taken from the 2017 series of Threads meetups, helping you to make the best use of product management.
A product is anything that provides value. It may be tangible or intangible, and may include services or people resources.
Before someone can justify switching to your product, they have to want to fire the incumbent. To counter this inertia, aim to make a product that’s 10 times better!
Jock says: Users always have a choice. They can choose to solve their problem a different way or not to bother solving the problem at all. Whatever problem your product solves, it has to be as frictionless as possible – including the effort for the user of changing their habits to use your product in the first place. And this absolutely has to be less effort for the user than simply putting up with the status quo.
Most products differ substantially from their original guise; it’s easy to forget this due to survivorship bias! Hindsight is one way to learn and evolve, but first you’ve got to earn it!
Jock says: Hindsight is a reflection of being better informed. Similarly, business risk is a measure of uncertainty – the less you know for sure, the greater the risk. Reduce your uncertainty and risk through research, experimentation and learning.
- Q&A: How do I distil users’ wishlist of requirements into a core handful?
- Find the tipping point in your research
There is very little that’s truly new. Accept that you are just making things better, MUCH better. It’s better to do one thing, or a small number of things, exceptionally well, than to spread your product too thinly.
You will see benefit from taking a servant leadership approach. Your Product Manager doesn’t call all the shots; their role is to clarify and evidence the goals, and to get a group of experts to align and deliver, all on the same page.
In defining success, ask yourself:
- Do I really understand why users buy my product? Separate the demand (volume) from the need (why). The best way to do this is to go and talk with your users to see the context within which they’re working. Understand their problems, drivers and the implications. Watch for other underlying factors or inefficiencies that you may want to solve in the future.
- What does success look like for your end user? What characterises success, first for MVP and then for full launch? If the initial launch wasn’t successful, what should be changed, as a priority, to address this?
Early product adopters are invaluable but typically have differing needs from the major market. They may help shape the product but shouldn’t define it entirely. Their own needs will also evolve, grow and mature.
Expose your product to users as early as possible and throughout its development life cycle.
Define what success looks like for the different user groups, including your own goals, and how well these are achieved. Ensure you’ve understood the goals correctly.
Features are merely a by-product of helping someone.
Jock says: Remember the example of the petrol gauge in the discussion. The user problem is that he or she doesn’t want to run out of petrol. You can solve their problem in many different ways, but without understanding the user’s context, and what will work for them and what won’t, it’s difficult to come up with the best solution for their needs. Once you truly understand the user and their problem, the solution will suggest itself, and the features you actually build will come out of that. That’s why you don’t start with features. You start with user goals.
Gathering feedback is best done face-to-face, or via video or screen share. A nice indication that you’re getting something right is users saying “oh my god, I didn’t know I could do that – this will help me loads!” You must absolutely nail this, aka ‘product–market fit’. Work towards these ‘OMG moments’ as the de facto standard.
Fake doors and telemetry can be cost-effective for testing a new concept, measuring clicks and patterns. This shows what’s happening, but never _why_ it’s happening. You need to verify both the qualitative and the quantitative perspectives – each presents questions to the other.
Jock says: Quant analytics may tell us that users are dropping out of our shopping cart process at a certain stage. But without going and speaking to them (qual analysis), we have no idea why.
Once we have a better idea of why, we can create an experiment to test possible ways to improve the drop-out rate. So we might reckon that making the ‘Buy’ button more prominent might help, then we run an A/B test (another quant bit of analysis). We find out that a more prominent button improves the conversion rate for desktop users, but not mobile ones. We need to find out why, so we do more qual analysis (talking to users). And so on.
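A back-of-the-envelope way to check whether a change like a more prominent ‘Buy’ button genuinely improved conversion – rather than just fluctuating with noise – is a two-proportion z-test, run per segment. This is a minimal sketch with entirely made-up numbers; the function name and figures are illustrative, not from the discussion:

```python
from math import sqrt, erf

def conversion_test(control_buys, control_visits, variant_buys, variant_visits):
    """Two-proportion z-test: did the variant change the conversion rate?"""
    p1 = control_buys / control_visits
    p2 = variant_buys / variant_visits
    pooled = (control_buys + variant_buys) / (control_visits + variant_visits)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visits + 1 / variant_visits))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, p_value

# Hypothetical results: run the test per segment, not just overall
desktop = conversion_test(control_buys=120, control_visits=2000,
                          variant_buys=168, variant_visits=2000)
mobile = conversion_test(control_buys=90, control_visits=1800,
                         variant_buys=95, variant_visits=1800)
print(f"desktop: {desktop[0]:.1%} -> {desktop[1]:.1%}, p={desktop[2]:.3f}")
print(f"mobile:  {mobile[0]:.1%} -> {mobile[1]:.1%}, p={mobile[2]:.3f}")
```

With these invented numbers, the desktop uplift is statistically significant while the mobile one isn’t – which is exactly the point at which you go back to qual analysis and ask mobile users why.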
When selecting users from whom to seek product feedback, work with the users to whom your product matters the most. Take a balanced view, incorporating feedback from your most vocal users too, noting that they are typically either delighted or disappointed.
Jock says: Also, if your resource is limited – say, if you’re a startup – consider prioritising the group of users that you’ll be able to get hold of to speak to most easily.
Work to find the evidence from your users as to the problems that they need to solve.
It can be difficult balancing the need to deliver new features that are visible, with the internal housekeeping, maintenance and technical debt management. Accept that ‘a good time’ to resolve housekeeping issues never fully arrives, so chip away constantly.
Every product release is instantly legacy. Everything that you can clean up makes your engineering team quicker, so they can develop a better product and ship it faster.
Also look for ways of improving your own business, or freeing extra capacity. You can apply business analysis tools and methods to reduce tomorrow’s legacy. Improvements may, or may not, be external facing. (Your support teams, developers, etc. are also users of your product – internal ones.) Speed of experimentation, learning and delivery should be part of the criteria for success.
Jock says: Regular organisation-wide retrospectives can be a way to identify improvements to how the business operates – and how pleasant a place it is to work at. Treat each action from the retrospective as a measurable experiment. “If we try this, then we should see these specific results by this point in time.”
Jock says: Technical debt always accrues, so the best approach (if you can’t avoid it to begin with) is to tackle it little and often. Devote a day each week to squashing pre-existing bugs and dealing with technical debt. Sometimes technical debt is chunkier, such as needing to refactor a major part of the product, in which case have a ‘firebreak’ – a week or four – devoted purely to reworking existing code rather than creating new stuff.
Working to three-year refresh cycles is a sensible time frame and pace within which to manage your products; you should effectively rebuild your product every few years given the pace of technological change.
The best person to manage user engagement is someone with both the temperament and the rigour to ask informed questions and solicit unbiased feedback, and then to analyse the data, synthesising this into pointers towards solutions. Encourage engineers and the rest of the team to observe user needs frequently.
You are not asking your users if they ‘like’ the product, you’re asking them what problem it solves.
Promote a culture that is open about the business goals and also about hurdles or shortcomings. Be open and transparent, or you will suffocate ideas about how to address these.
Do not punish mistakes. Failure of an experiment is just another learning point. You will rarely get something perfect from the off.
Keep the conversation going with your staff about the problems users face, ideas which may help and potential solutions.
Post up your product backlog, roadmap, user research findings and designs on available wall space. Invite discussion and encourage staff feedback. Plaster the office walls with this – kitchen, coffee machine or water point – mirrored systematically on Trello (or equivalent). Spark conversation among a diverse range of colleagues.
To measure a product’s success, understand what it is that you expect to change, and by how much. Define a hypothesis, construct the experiment and measure effectively, at an appropriate scale.
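As a minimal sketch of what ‘define a hypothesis, then measure against it’ might look like in practice – all names, metrics and numbers below are invented for illustration, not from the discussion:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """One change, framed as a measurable hypothesis with a review date."""
    action: str        # what we will try
    metric: str        # what we expect to change
    baseline: float    # where the metric is today
    target: float      # where it should be if the hypothesis holds
    review_by: date    # when we check, win or lose

    def succeeded(self, measured: float) -> bool:
        # Success means the measured value hit the target by the review date
        return measured >= self.target

# Hypothetical example: "If we try this, then we should see these
# specific results by this point in time."
exp = Experiment(
    action="Show support history on inbound calls",
    metric="customer satisfaction rating (1-10)",
    baseline=6.2,
    target=7.0,
    review_by=date(2018, 3, 1),
)
print(exp.succeeded(7.4))
```

Writing the baseline, target and review date down before the experiment starts keeps the result honest: the numbers either moved as predicted or they didn’t.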
It is better to measure outcomes (qualitative: the real-life difference that the product makes) than outputs (quantitative: the volume by which the product is used, or how many things you make).
You must understand outcomes in real-world terms: new customer sign-on time is quicker, releases can happen more quickly, or website sales are converting. How well is the user able to achieve their desired goal?
Avoid ‘vanity metrics’ that have little bearing on real outcomes, even if they seem to satisfy the board or investors; e.g. clicks, page views, referrals from Twitter.
Jock says: A good metric is one that prompts you to act: you know which specific action will directly cause the numbers to go up or down. A vanity metric has no obvious remedial action because you’re not in control of its causes.
For example, page views are a vanity metric because they may go up or down due to the time of day, whether search engine algorithms happen to favour your content that day, or whether people are searching for those particular keywords – none of which is in your direct control. Moreover, page views usually have no correlation with the success or failure of your product, so why bother counting them? Croll and Yoskowitz’s Lean Analytics is a recommended read on measuring the right things.
An example of a better metric might be to track e.g. the number of commonly-purchased products displaying as ‘out of stock’, as this will most likely have a direct correlation to potential customers dropping off your site and failing to convert. You can take remedial action by ensuring that you know which items are most commonly purchased, and to replenish stock more quickly before it runs out.
Jock says: Express sprint lists and project targets in real-world terms that are focused on human outcomes.
- At the end of this sprint, users with impaired vision will be able to navigate our website at least twice as quickly
- At the end of this sprint, our internal support teams will be able to see a customer’s complete support history when they call, which will reduce the time a customer needs to be placed on hold by 80% and should increase customer satisfaction ratings of support by at least 3%.
- At the end of this sprint, Android users with a fingerprint sensor will be able to use it to unlock the app instead of entering a passcode so we should see a 50% reduction in app login times for that user segment.
- At the end of this sprint, teams will be able to deploy new code to staging and production in under 5 minutes, including all required tests.
Having access to measured data can also be handy when answering a Freedom of Information (FOI) request – usually someone asking a government body for specific factual information – as it saves researching the answer from scratch.
What is a minimum viable product (‘MVP’)? Simply, it’s an experiment from which to learn, delivered as a small collection of features that solves a set of problems for a group of people.
An MVP must solve at least one of the problems that it addresses completely, allowing for the scale and complexity of the problem.
You should, within reason, be able to sell an MVP for money. It is absolutely OK to charge users while learning from them at the same time.
A ‘Wizard of Oz’ approach is OK, whereby the mechanism is invisible behind the scenes; the user doesn’t care, so long as the need is met.
Jock says: Google ‘concierge MVPs’
If you go too broad and try to solve all the problems you risk spreading your product too thinly. Do one thing really well; the one thing that (a small group of) users really want!
Early adopters will want to experience your product claims in the real world, and for it to be measurable. Don’t bullshit your users.
As a newcomer, building a sense of trust can seem a Catch-22. Experiment to find the least acceptable risk for your product’s users, demonstrate your product’s capability, then ramp it up. Focus efforts on establishing trust; e.g. accreditations, referrals, standards, associations or partnerships with established brands.
Your competition will always be there and, even if you are enjoying low competition, your users will eventually go another route. Isolate and solve their priority problems, and make the complicated look simple.
Don’t be afraid to fake it while you make it.
Find people – colleagues, users – who can challenge your status quo, and who can poke holes in your approach and product, and apply objectivity while you improve.
We anticipate holding a follow-on discussion with Jock in early 2019.
– Croll and Yoskowitz – Lean Analytics
– C Todd Lombardo – Design Sprints
– Jake Knapp – Sprint
Threads meetups are a way for founders and department heads of technology companies to share learning, experiences and conundrums. These roundtable discussions unpack topics around leadership, business and operations. Most people find at least one improvement to take away and implement.
Threads is held from 6.00pm to 8.30pm on the first Wednesday of each month. To RSVP, head to the Threads South West meetup page.
For more founders’ scale-up tips, follow Threads on Twitter.
Keep an eye out for more Threads guest blogs coming to TechSPARK soon.