Conf42 JavaScript 2025 - Online

- premiere 5PM GMT

Cross-Cloud Campaign Analytics Meets JavaScript: Building Scalable Frontends for Big Data Insights

Abstract

Explore how JavaScript and big data come together to build responsive, cloud-native analytics dashboards. From API orchestration to sleek React interfaces, learn how we turned backend complexity into front-end clarity for campaign intelligence.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello everyone. I'm Sruthi Erra Hareram, an independent researcher based in Canada. Today I'll show how cross-cloud campaign analytics and modern JavaScript come together to deliver fast, trustworthy insights for marketing and growth. If you have ever waited on a slow dashboard or tried to connect data that lives in different places, this session is for you.

Here's our path. I will outline the real problems teams face, and then the end-to-end architecture. I will open up the data engineering layer and connect it to the browser, explain the React patterns, show how we handle large datasets, discuss embedding business intelligence, and end with storytelling, role-based views, performance discipline, collaboration, and concise takeaways before a brief set of questions.

So here is the modern analytics challenge. Marketing teams collect more data than ever, and yet struggle to answer simple questions quickly. Channels report differently, schemas drift, and extracts arrive late. Confidence erodes when two dashboards disagree. What decision makers want are fresh, reliable numbers that respond as they explore. JavaScript is the bridge between distributed computation and human understanding. It turns aggregates and APIs into an interface that invites discovery. Imagine a campaign manager asking which creative drove stronger returns last week in major cities after a specific budget shift. The backend may already hold the truth, but without a responsive interface, it stays locked away. Our aim is to reduce the distance between the question and a trustworthy answer.

Here is the architecture fueling marketing intelligence. Picture the flow from ingest to transform to serve to visualization. Data first lands in a durable object store that welcomes partitioning by date and by source. An orchestration service schedules movement and health checks, so late or missing data is obvious rather than mysterious. Scalable compute performs transformation and model scoring, producing tidy, query-ready tables. We expose only the shapes that the application needs through a guarded application programming interface (API) with strict limits, caching, and pagination; a minimal sketch of such an endpoint appears at the end of this section. The React application is the translation layer that turns small, precise payloads into filters, narratives, and charts. Each stage is optimized for a different constraint. Storage focuses on cost and durability. Compute focuses on throughput and correctness. Serving focuses on safety and latency. The browser focuses on clarity and flow.

Why do we need cross-cloud? Cross-cloud is a choice to combine strengths while keeping options open. Storage lives where economics and ecosystems are strongest. Transformation runs where identity, governance, and managed Spark suit the team. We reduce data movement by processing close to sources and returning aggregates that the user will actually consume. When prices, compliance, and scale needs change, the system can shift workloads without rewriting the entire product, because the contract to the front end remains constant.

Deep diving into the data engineering, we focus mainly on the storage layer, Azure Data Factory for ETL, and Azure Databricks for processing. In the lake we partition by day and by source, so queries scan only what they need. Lifecycle rules push cold data to cheaper tiers without breaking lineage.
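To make the serving contract concrete, here is a minimal sketch of a guarded endpoint in the spirit of the architecture above: payload caps, server-side pagination, identity-scoped queries, a short cache, and compressed responses. It is an illustration, not the talk's production code; Express and the compression middleware are assumptions, and the route, the auth middleware, and the queryCuratedAggregates helper are hypothetical names.

    import express from 'express';
    import compression from 'compression';

    const app = express();
    app.use(compression()); // compress every response, as the serving layer does

    const MAX_PAGE_SIZE = 500; // hard cap on payload size (illustrative value)

    // Stand-in for the real warehouse query; returns tidy, pre-aggregated rows.
    async function queryCuratedAggregates({ tenantId, page, pageSize }) {
      return []; // hypothetical data-access helper
    }

    app.get('/api/v1/campaign-metrics', async (req, res) => {
      // Identity claims are assumed to be resolved by upstream auth middleware.
      const tenantId = req.auth?.tenantId;
      if (!tenantId) return res.status(401).json({ error: 'unauthenticated' });

      // Clamp pagination inputs so no request can demand an oversized payload.
      const page = Math.max(1, Number.parseInt(req.query.page, 10) || 1);
      const pageSize = Math.min(MAX_PAGE_SIZE, Number.parseInt(req.query.pageSize, 10) || 100);

      const rows = await queryCuratedAggregates({ tenantId, page, pageSize });

      // Short server-side cache for hot endpoints, so surges do not hurt downstream systems.
      res.set('Cache-Control', 'private, max-age=60');
      res.json({ page, pageSize, rows });
    });

    app.listen(3000);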
Incoming drops are checked with manifest files, so missing partitions and record count mismatches surface early. In orchestration, we run batch and micro-batch pipelines, parameterized by date and source, so one pattern covers many feeds. Alerts include the failing step, the impacted partition, and a direct link to the run, which shortens recovery.

In the compute layer, we store curated tables in columnar formats with constrained schema evolution. Deterministic transforms make outputs reproducible; attribution logic and deduplication happen here, not in the browser. Aggregates arrive at the grains that product managers actually use, such as daily by geography, by channel, by creative. Row-level security applies at query time using user identity claims, so each tenant sees only their own data. The serving layer enforces payload caps and compresses responses. Hot endpoints benefit from short server-side caches, so surges do not hurt downstream systems. What reaches the browser is small, honest, reliable, and repeatable.

Processing scale in numbers: daily data volumes, events per day, query response times, and the uptime SLA. Daily data volume can run from hundreds of gigabytes up to a terabyte, and individual user interactions are tracked and analyzed across all campaigns. Query response times and the SLA are maintained to guarantee availability of the dashboards.

From the backend to the browser is the point where understanding happens. We request only what we need, render quickly, and design interactions around real questions. A small initial payload confirms that the system is responsive; deeper details arrive when the user shows intent. The feeling should be one of focus rather than noise.

Here is the React dashboard architecture, which has four major components: the component library, state management, API integration, and the performance layer. In React, we separate data, state, and presentation, so the codebase stays calm as features grow. Reusable chart components share the same grammar, so titles, axes, and empty states feel familiar. Filters such as date, geography, channel, and creative live in a central store with memoized selectors that prevent unnecessary updates. Network requests automatically include authentication and retry with care, and stale requests are cancelled when the user changes direction. The interface updates immediately when a filter changes, and the data fills in a moment later. This pattern gives confidence that every action was understood.

Handling large datasets on the client side: large datasets do not require heavy interfaces. We virtualize long tables, so the document only contains what is visible. We load progressively, sending a high-level summary first and fetching details on demand. Small roll-ups run in a background worker, so the main thread stays responsive. Rapid filter changes are debounced, so only the final intent triggers work. Payloads are capped and compressed, so ordinary laptops on ordinary networks feel fast.

Coming to the integration pattern with Power BI: many organizations rely on existing business intelligence assets. We embed those reports so they feel native. Single sign-on carries identity into the frame. Event handlers keep filters synchronized in both directions, so a choice in React is reflected in the embedded visual, and a selection in the visual updates the surrounding state. A minimal sketch of this embedding pattern follows below.
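The embedding pattern just described can be sketched with the powerbi-client library. This is a minimal illustration under stated assumptions: a recent powerbi-client, a backend endpoint that issues embed tokens, and hypothetical report identifiers and a hypothetical Campaigns/Channel filter target.

    import * as pbi from 'powerbi-client';

    const { models } = pbi;
    const powerbi = new pbi.service.Service(
      pbi.factories.hpmFactory,
      pbi.factories.wpmpFactory,
      pbi.factories.routerFactory
    );

    // Placeholder values; in practice a backend endpoint issues the embed token.
    const REPORT_ID = '<report-guid>';
    const EMBED_URL = '<embed-url-from-backend>';
    const EMBED_TOKEN = '<embed-token-from-backend>';

    // Embed the report so it feels native; single sign-on rides on the token.
    const container = document.getElementById('report-container');
    const report = powerbi.embed(container, {
      type: 'report',
      id: REPORT_ID,
      embedUrl: EMBED_URL,
      accessToken: EMBED_TOKEN,
      tokenType: models.TokenType.Embed,
    });

    // Visual -> app: a selection in the embedded visual updates surrounding state.
    report.on('dataSelected', (event) => {
      syncFiltersFromVisual(event.detail); // hypothetical app-side handler
    });

    // App -> visual: a filter choice in React is pushed into the embedded report.
    async function applyChannelFilter(channels) {
      const filter = {
        $schema: 'http://powerbi.com/product/schema#basic',
        filterType: models.FilterType.Basic,
        target: { table: 'Campaigns', column: 'Channel' }, // hypothetical table/column
        operator: 'In',
        values: channels,
      };
      await report.updateFilters(models.FiltersOperations.ReplaceAll, [filter]);
    }

    function syncFiltersFromVisual(detail) {
      // Left as a stub: map Power BI data points onto the app's filter state.
    }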
Exports to portable documents and spreadsheets are initiated from the application itself, which keeps people in one place while sharing results.

On visualization and storytelling with data: every chart should speak in a full sentence. The title states the conclusion rather than the category, such as "Creative B outperformed Creative A by 18% in cities with population above 1 million." A timeline shows what changed before and after key events. Benchmarks provide context for whether a number is good or merely loud, and annotations connect movement in a line to business decisions, which is how numbers become actions.

Role-based analytics views: executives need a compact summary of return on investment, spend pacing, and notable movers. Campaign managers, on the other hand, require a tactical surface that crosses channel, creative, and geography with a clear next step. And analysts need freedom to slice and export the data. We present distinct entry points that share a single source of truth.

Performance optimization strategies: there are four strategies to focus on, starting with code splitting, then smart caching, data pagination, and debouncing and throttling; a small sketch of code splitting and debouncing follows at the end of this transcript. Performance is not a finishing touch. It is a promise. We split code, so heavy areas load only when visited. We cache on the server and in the browser, with lifetimes that match how fast the data changes. We paginate on the server and prefetch the next page while the user reads. We track time to interactive, first contentful paint, and the 95th percentile of API latency. And we treat regression alerts as product bugs to fix immediately.

Speed comes from alignment. We write interface contracts first, so the front end and the back end build in parallel. We review payload sizes, query patterns, and frame rates together. We keep a living page for decisions and objectives, so newcomers join the story quickly. After incidents, we write short, honest notes about what changed. That freedom keeps us shipping.

The key takeaways I got from this project: JavaScript is the bridge from complex cross-cloud analytics to human decisions. Performance is not negotiable. The biggest wins appear when platform, data, and design teams work as one and measure their promises.

I am happy to take a few questions. If you are deciding what belongs on the server and what belongs in the browser, keep joins and heavy aggregation on the server and leave only small roll-ups to the background worker. If you are thinking about tenant safety, enforce row-level filters on the server using identity claims, and never rely on client-provided filters. If your stack uses a different warehouse or cloud, the architecture is still portable, because the contract to the browser remains constant.

Thank you for your time. I'm Sruthi, and if this session sparked some ideas for your tech stack, your front-end performance, or your cross-cloud strategies, I'm glad to continue the conversation. Enjoy the rest of the conference.
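As a companion to the performance strategies above, here is a minimal React sketch of two of them: route-level code splitting with React.lazy, and a debounced filter handler. The component and module names are hypothetical.

    import React, { Suspense, lazy, useMemo } from 'react';

    // Code splitting: the heavy analytics area loads only when it is visited.
    const CampaignExplorer = lazy(() => import('./CampaignExplorer')); // hypothetical module

    // Debouncing: rapid filter changes collapse so only the final intent triggers work.
    function debounce(fn, waitMs) {
      let timer;
      return (...args) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    export function Dashboard({ onFilterChange }) {
      // Keep the debounced function stable across renders.
      const debouncedChange = useMemo(() => debounce(onFilterChange, 250), [onFilterChange]);

      return (
        <Suspense fallback={<p>Loading analytics…</p>}>
          <CampaignExplorer onFilterChange={debouncedChange} />
        </Suspense>
      );
    }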
...

Sruthi Erra Hareram

Data Engineer @ TELUS



