Abstract
Building great front-end experiences is just the beginning — real success comes from understanding how users interact with them. In this talk, we’ll explore how front-end analytics with Amplitude can unlock smarter product decisions, faster debugging, and real growth. You’ll learn how to set up meaningful event tracking, run experiments, and use real user behavior data to prioritize features, fix bottlenecks, and drive engineering efficiency. Through practical examples and actionable strategies, we’ll move beyond basic dashboards into a world where every click, scroll, and hesitation tells a story. Whether you’re a developer, product manager, or tech lead, you’ll leave with a clear playbook for turning Amplitude data into real-world impact — creating products that are not just used, but truly loved.
Transcript
Hello everyone.
My name is Roger Dham.
I'm currently working as a staff engineer at N one.
Today we are diving into how frontend analytics can transform the way we build,
debug, and improve our web applications.
Whether you're in product, design, engineering, or growth, this talk is about enabling smarter decisions based on real user data.
So what does frontend analytics mean?
Frontend analytics is like putting on a pair of glasses and seeing the app through the user's eyes. It tells us what people actually do on our site: where they click, how far they scroll, how long they stay, or even where they rage-click out of frustration.
Imagine watching someone try to sign up, but they get stuck on a field.
That insight can make the difference between a user bouncing or converting.
So why does frontend analytics matter?
The reality is that most users never report bugs or confusion. They just leave the website or web application. Analytics helps us catch those silent failures.
It also helps us test ideas quickly. Let's say a team launches a new pricing page. With analytics, we can see instantly whether it improves signups or hurts them, so we don't need to rely on gut feelings or wait for customer complaints. Analytics gives us fast, reliable feedback.
So who benefits from frontend analytics? This is not just for data folks; it helps pretty much everyone on the team, whether that's product, engineering, growth, or design.
Product teams get to prioritize the features that actually matter by analyzing funnels and looking at user data. Engineers can debug based on real usage instead of guessing, and see where the actual drop-off is. Growth teams can use analytics to run experiments, A/B test, optimize flows, and analyze existing funnels. And designers can validate their layouts and UI/UX choices against real user behavior.
It creates a shared understanding across teams.
So what can we actually track with this?
So we are not just tracking clicks, right?
Here, think about capturing user intent. For example, someone searching for a product, starting a signup, or hovering over the help icon. We can track time spent reading about a feature, how many people scroll all the way down the homepage, or how long people spend figuring things out before completing the action they're supposed to do.
So pretty much everything can be tracked and analyzed.
The point is to understand what the user is trying to accomplish, not just what they're clicking. Those are the things we try to track and get out of the analytics.
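As a concrete illustration, here is a minimal sketch, assuming the Amplitude Browser SDK (@amplitude/analytics-browser), of capturing one of these intent signals. The API key and the event name are placeholders, not something from a real setup.

```ts
import * as amplitude from '@amplitude/analytics-browser';

// Placeholder key from your Amplitude project settings.
amplitude.init('YOUR_API_KEY');

let bottomReached = false;

window.addEventListener('scroll', () => {
  const atBottom =
    window.innerHeight + window.scrollY >= document.body.scrollHeight - 10;

  // Fire once per page view so we don't flood the event stream.
  if (atBottom && !bottomReached) {
    bottomReached = true;
    amplitude.track('Homepage Scrolled To Bottom', {
      // Rough proxy for how long the user spent before reaching the end.
      timeOnPageMs: Math.round(performance.now()),
    });
  }
});
```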
So what are the types of analytics?
There are different flavors of analytics, but there are mainly four types: behavioral, technical, experimental, and session replay.
So what does each of them tell us? Behavioral analytics tells you what users did: clicked, dropped off, or stayed. Technical analytics shows things like load time, performance, and JavaScript errors, or any errors blocking the user from proceeding. Experimental is more about A/B testing: figuring out which variant is performing better than the other, or how user acceptance and interaction differ between variations. And session replays are like watching a video of the user's screen, which is great for debugging and UX reviews, and for understanding how the user is interacting with our web application.
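To make the technical flavor a bit more concrete, here is a minimal sketch, assuming the Amplitude Browser SDK, of sending uncaught JavaScript errors into the same event stream as the behavioral data. The event name and properties are illustrative, not a prescribed schema.

```ts
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_API_KEY'); // placeholder key

// Report uncaught errors as events so they can be correlated with drop-offs.
window.addEventListener('error', (event) => {
  amplitude.track('JS Error', {
    message: event.message,
    source: event.filename,
    line: event.lineno,
    userAgent: navigator.userAgent, // lets us slice errors by browser later
  });
});
```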
So what are the popular tools available for frontend analytics? There are a lot of tools out there, so I'm not going to go through each and every one; I just picked the most popular ones, like Amplitude, Mixpanel, PostHog, and Sentry. There are a bunch more tools available, but as I said, I'm not going to go over them. For this session, I just took Amplitude as the example, and we'll focus on how we can set it up, do a simple analysis, and understand how we can make use of it.
Why did I choose Amplitude? Because it makes it easy to track and understand user behavior, and it's pretty simple to set up. Let's say your product manager wants to know whether users are using a feature, and to create a funnel and a cohort in seconds. Amplitude can help you do that pretty fast. What's better, you can also share a dashboard with your team without any SQL or analytics expertise. It's all self-serve. That's the real-time collaboration we can get out of these enterprise-focused tools.
Let's take an example of a signup funnel. As a real-world example, say we are losing users midway. We can break it down: we can build a funnel that starts with signup start, then the user entering their email, then signup success. Let's say 70% start but only 20% complete the signup; then we know there is friction. Maybe we test a shorter form. Amplitude lets us run that A/B test and see if more users complete signup. We can play around with variations: reduce the steps, introduce an accordion, or introduce a single page. There are multiple ways to test it out and roll it out to certain segments or similar cohorts and understand what is working and what is not. That's the flexibility we generally get with funnel analysis and experiments.
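Building the funnel itself happens in the Amplitude UI, but the funnel is only as good as the events behind it. Here is a minimal sketch, assuming the Amplitude Browser SDK, of instrumenting the three steps mentioned above; the event names are illustrative choices, not required ones.

```ts
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_API_KEY'); // placeholder key

// Step 1: the user lands on the signup form.
amplitude.track('Signup Started');

// Step 2: the user enters their email.
amplitude.track('Email Entered', { formVariant: 'long-form' });

// Step 3: the account is created. In Amplitude, a funnel from
// 'Signup Started' -> 'Email Entered' -> 'Signup Completed'
// shows where the 70% -> 20% drop happens.
amplitude.track('Signup Completed');
```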
How do we actually integrate an analytics tool, in this case Amplitude tracking? This is pretty standard and pretty straightforward with most of these tools. All we do is import the Amplitude library; for each client and each framework there is a respective library. We import it and initialize it with the API key we get when we subscribe to the tool. Here we are looking at an example with React, where we just import Amplitude, and then all we have to do is log an event whenever the user takes an action, whenever there is a navigation, or any other action. All we do is pass the event name and the associated properties. In this case it could be the name, the number of times clicked, or the time spent. We can send pretty much any custom properties with the event and then track them.
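Since the slide isn't reproduced in the transcript, here is a minimal sketch of that kind of React setup, assuming the Amplitude Browser SDK (@amplitude/analytics-browser); the API key, component, and event properties are placeholders.

```tsx
import * as amplitude from '@amplitude/analytics-browser';
import React from 'react';

// Initialize once at app startup with the key from your Amplitude project.
amplitude.init('YOUR_API_KEY');

export function SignupButton() {
  const handleClick = () => {
    // Log the event with whatever custom properties are useful for analysis.
    amplitude.track('Signup Button Clicked', {
      buttonName: 'Start signup',
      timeOnPageMs: Math.round(performance.now()),
    });
  };

  return <button onClick={handleClick}>Start signup</button>;
}
```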
So what is the optimal way to set up analytics? The idea is that tracking starts with a plan. We cannot just log everything; that blows up the data, and these tools also charge based on the amount of data you're tracking. So obviously we don't want to end up tracking each and every thing. We have to have a plan and a procedure, or at least a structure, for the tracking and the analytics we want to target. In this case, we have to define a clean event list: what we want to track, and which properties we think will be helpful for analysis in the future. We can also follow some standard patterns, like consistent names and grouping related actions. And when a user logs in, we can use the user ID to connect the dots, meaning we pass the user ID on login so that all the actions related to that user are tied together. This way we see both anonymous and logged-in user behavior together.
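Here is a minimal sketch of that "connect the dots" step, assuming the Amplitude Browser SDK; the event names and the login callback are illustrative placeholders.

```ts
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_API_KEY'); // placeholder key

// Before login: events are attributed to an anonymous device ID.
amplitude.track('Signup Started');

// Hypothetical callback invoked once your auth flow succeeds.
function onLoginSuccess(userId: string) {
  // Setting the user ID ties this device's events to the known user,
  // so anonymous and logged-in behavior show up together.
  amplitude.setUserId(userId);
  amplitude.track('Login Completed');
}
```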
So how do we run an A/B test? Let's say we want to test whether a new call to action increases signups or doesn't perform well. How do we test it? We can do that with Amplitude Experiment. We split the users into two groups, A and B: A is the old design and B is the new design. Then we can see how they're converting. We'll have the conversion rate for each cohort in the experiment, and we can see which cohort, or which variant, is doing better. We get the percentages, and then we can confidently say which version is performing better. This is where the actual power comes in, and this is where the product, design, and growth teams can effectively make use of running experiments and understand what is working for your users.
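Here is a minimal sketch of what that split can look like in code, assuming the Amplitude Experiment JS client (@amplitude/experiment-js-client); the deployment key and the flag key 'new-signup-cta' are placeholders.

```ts
import { Experiment } from '@amplitude/experiment-js-client';
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_API_KEY'); // placeholder analytics key
const experiment = Experiment.initialize('YOUR_DEPLOYMENT_KEY');

async function getCallToActionLabel(): Promise<string> {
  // Fetch the variants assigned to this user.
  await experiment.fetch();

  const variant = experiment.variant('new-signup-cta');

  // B (treatment): new design. A (control): old design.
  return variant.value === 'treatment' ? 'Try it free today' : 'Sign up';
}

// When the signup actually completes elsewhere in the app, track it as a
// normal event (e.g. amplitude.track('Signup Completed')) so Amplitude can
// compare conversion rates between the two variants.
```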
So how can an engineer make use of analytics? The best part is we can also debug with analytics. Let's say you released a feature and then suddenly the conversion rate drops. With analytics, we can spot that trend, filter by browser or version, and correlate it with a recent deployment. We can also dive deeper with session replays: we can literally watch the user struggle, meaning we see where the user was able to proceed versus where the user got stuck. It's like time travel for debugging.
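For that "filter by browser or version" step to work, the context has to be on the events in the first place. Here is a minimal sketch, assuming the Amplitude Browser SDK and its appVersion config option; the version string, event name, and error code are placeholders.

```ts
import * as amplitude from '@amplitude/analytics-browser';

// Assumed to be injected at build time from your release pipeline.
const APP_VERSION = '1.4.2';

amplitude.init('YOUR_API_KEY', {
  appVersion: APP_VERSION, // stamped on events, so charts can be split by release
});

// Event-level properties add more debugging context for a specific failure.
amplitude.track('Checkout Failed', {
  userAgent: navigator.userAgent,
  errorCode: 'PAYMENT_TIMEOUT',
});
```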
So what do we take away from this session? The summary, or the outcome, is basically this: frontend analytics isn't just about dashboards, it's about closing the feedback loop between what we build and what users experience. We can start small, track a few meaningful events, use tools like Amplitude to make those insights visible across teams, and then use that data to build confidently together.
So yeah, that's pretty much a high-level look at what we can do, what the advantages are, and what we can build with frontend analytics.
Thank you.
Thank you for giving this opportunity.
Thank you for listening.