I spent five years as a web analyst before I became a product owner at the same company. That transition changed how I use data more than any course or certification ever could — because I saw, from the inside, how differently the two roles actually consume analytics.
An analyst’s job is to explain what happened. A product owner’s job is to decide what to do next. Those sound similar. They’re not.
The reporting trap
Most product teams inherit an analytics setup built by an analyst or a marketing team — which means it’s optimised for reporting, not decision-making. You get dashboards full of sessions, bounce rates, goal completions, and traffic sources. You get weekly summaries that tell you whether numbers went up or down. You get comparisons to the same period last year.
All of that is useful. None of it is enough.
The reporting trap is treating analytics as a health check rather than a question-answering engine. If you only look at your data when something breaks, or when someone asks for a number, you’re using it as a rear-view mirror. Product owners need a windshield.
The three things analytics actually tells you
Digital analytics tells you three things that matter for product decisions:
- Where users drop off. Not why — that’s a different question, usually answered by user research. But where the funnel breaks. Which page, which step, which interaction has an unusually high exit rate. This is your shortlist of hypotheses (there’s a sketch of this check below).
- What users actually do vs. what you expected them to do. Every product decision involves an assumption about behaviour. Analytics either confirms or disproves that assumption. A feature built for X is being used primarily for Y. A flow designed for linear navigation is being navigated backwards. The data doesn’t explain it — but it shows you where to look.
- Whether something changed after you shipped. This is analytics’ most important function for a PO. You shipped a change. Did the metric you were trying to move actually move? By how much? Did anything else move that you didn’t intend?
Everything else — traffic trends, device breakdowns, source/medium reports — is context. Useful context, but not a decision-making input on its own.
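To make the first of those concrete, here is a minimal sketch of a drop-off check in TypeScript. It assumes you can export one row per user per funnel step reached; the `FunnelEvent` shape and the step names are hypothetical, not the schema of any particular tool.

```typescript
// Hypothetical event shape: one row per user per funnel step reached.
interface FunnelEvent {
  userId: string;
  step: string; // e.g. 'search', 'results', 'checkout', 'payment'
}

// Count distinct users reaching each step, then report step-to-step drop-off.
function funnelDropOff(events: FunnelEvent[], steps: string[]): void {
  const usersAtStep = steps.map(
    (step) =>
      new Set(events.filter((e) => e.step === step).map((e) => e.userId)).size,
  );

  for (let i = 1; i < steps.length; i++) {
    const prev = usersAtStep[i - 1];
    const curr = usersAtStep[i];
    const dropOff = prev === 0 ? 0 : ((prev - curr) / prev) * 100;
    console.log(
      `${steps[i - 1]} -> ${steps[i]}: ${curr}/${prev} users continued ` +
        `(${dropOff.toFixed(1)}% dropped off)`,
    );
  }
}
```

The step with the worst drop-off is a place to point user research, not a diagnosis.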
The vanity metric problem
Vanity metrics are numbers that look good in a slide but don’t connect to any decision you’d actually make. Total page views. Social media followers. Registered users (without active user data). Number of features shipped.
The test I use: if this number went up by 50%, what would I do differently? If the answer is nothing — if the number doesn’t change your priorities, your roadmap, or your next sprint — it’s a vanity metric. Track it if you must. Don’t report it as a success.
When I moved from analyst to PO, I inherited a dashboard full of them. Rewriting it was one of the first things I did, because the team was celebrating numbers that had no bearing on whether we were building the right things. We replaced the dashboard with five metrics tied directly to the business outcomes we were accountable for. The first time we presented it to the senior team, the meeting was shorter and the questions were sharper. That’s how you know you have the right metrics.
Behavioural data vs. business data
One of the distinctions that took me a while to fully internalise: behavioural data tells you what users do. Business data tells you whether it matters.
A user who completes a search but doesn’t book is a behavioural signal. A user who starts a checkout flow and abandons at the payment step is a different behavioural signal. But neither of those tells you what the business needs without business context: is the drop-off worse than six months ago? Worse than a comparable segment? Worse than you’d expect for that device type?
Analytics without business context is trivia. The product owner’s job is to hold both simultaneously — to see the behavioural pattern and know whether it’s a problem worth solving given where the business is.
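As a sketch of what holding both looks like, the check below judges a behavioural signal against the business baselines from those questions. The reading shape and the 10% threshold are illustrative assumptions, not recommendations.

```typescript
// A behavioural signal plus the business context needed to judge it.
// Rates are fractions, e.g. 0.42 for a 42% drop-off. Baselines assumed > 0.
interface DropOffReading {
  rate: number;              // current drop-off rate at the step
  sixMonthsAgo: number;      // same step and segment, six months earlier
  comparableSegment: number; // same step for a comparable segment or device
}

// Flag the signal only when it is meaningfully worse than its baselines.
// The 10% relative threshold is an arbitrary illustration, not a standard.
function worthInvestigating(r: DropOffReading): boolean {
  const vsHistory = (r.rate - r.sixMonthsAgo) / r.sixMonthsAgo;
  const vsSegment = (r.rate - r.comparableSegment) / r.comparableSegment;
  return vsHistory > 0.1 || vsSegment > 0.1;
}

// 42% abandonment sounds alarming in isolation, but here it is in line
// with both history and a comparable segment, so it is probably not the
// problem worth solving first.
console.log(worthInvestigating({
  rate: 0.42,
  sixMonthsAgo: 0.40,
  comparableSegment: 0.44,
})); // false
```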
Where to start if your analytics isn’t working for you
If you’re a PO who feels like your data isn’t helping you make decisions, the problem is usually one of three things:
Your tracking doesn’t match your product decisions. You’re measuring what was easy to instrument, not what you actually need to know. Fix: write down your three most important product questions, then check whether your current tracking can answer them. If not, that’s your instrumentation backlog.
Your data lives in too many places. Paid, organic, on-site, app — each channel reporting in isolation, no single view of what’s happening. This is an infrastructure problem, not an analytics problem. The solution is usually a consistent event taxonomy and a properly structured dataLayer.
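One way to picture that: a single event shape that every channel pushes into the same dataLayer, so downstream tools see one vocabulary instead of four. The field names below are illustrative assumptions; only `dataLayer.push()` itself is the real Google Tag Manager mechanism.

```typescript
// One event shape for every channel. The fields are hypothetical; the point
// is that the vocabulary never varies by channel.
interface ProductEvent {
  event: string;                                  // e.g. 'checkout_step_viewed'
  channel: 'paid' | 'organic' | 'onsite' | 'app';
  step?: string;                                  // funnel step, where relevant
  value?: number;                                 // business value, where relevant
}

function track(e: ProductEvent): void {
  // GTM reads whatever is pushed into window.dataLayer; push() is its API.
  const w = window as unknown as { dataLayer?: ProductEvent[] };
  (w.dataLayer ??= []).push(e);
}

// Every surface calls the same function with the same vocabulary:
track({ event: 'checkout_step_viewed', channel: 'onsite', step: 'payment' });
track({ event: 'checkout_step_viewed', channel: 'app', step: 'payment' });
```

The win isn’t the code; it’s that ‘checkout_step_viewed’ means the same thing everywhere, which is what makes a single view possible.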
You’re reading dashboards instead of asking questions. Open your analytics tool with a question in mind before you look at the numbers. ‘Did the change we shipped last week move the metric we were targeting?’ is a better starting point than scanning forty charts to see what catches your attention.
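Here is what answering that question can look like with nothing more than a daily export of the target metric. The seven-day windows and the numbers are made up for illustration.

```typescript
// Compare the target metric across equal windows before and after a release.
// This is a sanity check, not a significance test; for small effects you
// would want a proper statistical comparison, and you'd run the same check
// on the metrics you did NOT intend to move.
function beforeAfter(daily: number[], releaseDay: number, windowDays = 7) {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const before = mean(daily.slice(releaseDay - windowDays, releaseDay));
  const after = mean(daily.slice(releaseDay, releaseDay + windowDays));
  return { before, after, changePct: ((after - before) / before) * 100 };
}

// Hypothetical daily conversions, release on day 7 (0-indexed):
const conversions = [120, 118, 125, 119, 122, 121, 117,
                     131, 135, 129, 133, 136, 130, 134];
console.log(beforeAfter(conversions, 7));
// { before: ~120.3, after: ~132.6, changePct: ~10.2 }
```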
Digital analytics doesn’t tell you what to build. It tells you whether what you built is working — and where to look for what to fix. That’s enough to be invaluable, if you use it that way.
