[Too Long; Didn’t Read]
In this case study, I describe my experience as the Product Design Lead developing an advanced SaaS platform product, the Opensignal Performance Intelligence Dashboard. Throughout the project, I collaborated with various teams to ensure technical requirements were met. I conducted user research, feature analysis, usability testing, digital prototyping, and more. I focused on creating effective visualizations and maintaining UI consistency across products while keeping the design scalable for future updates. I contributed to the development of a powerful tool that provides mobile operators, regulators, and analysts with insights into network performance improvements.
And if you enjoy reading more…
This engineering & benchmarking analytics tool is part of the Opensignal* Software-as-a-Service (SaaS) Platform Product family, a series of products that address the most common questions asked by different groups within mobile operators, regulators, and analysts. It offers a more detailed drill-down behind the gold- and silver-standard metrics compared with its older sibling, Competitive Intelligence. The tool allows metrics to be decomposed in various ways, providing insight into how to improve network performance and competitive position.
*Opensignal is an authority in measuring and reporting on mobile network experience and quality worldwide.
What is the business problem we are solving?
Performance Intelligence enables teams responsible for network performance and best practices to gain a deeper understanding of the factors driving overall network experience. By analyzing key metrics in various ways, such as by time of day or by content delivery network, Performance Intelligence helps identify areas for improvement and enables teams to take targeted actions to optimize network performance. It also allows the mean to be broken down into a distribution, identifying the impact of extreme user experiences.
Description of the key persona
CTO of EE. Reports to the CEO. Responsible for developing the company’s strategy for using technological resources: ensuring technologies are used efficiently, profitably and securely, and evaluating and implementing new systems and infrastructure. Has the ability to influence the industry.
Sound understanding of technology | Able to explain complex technological concepts in simple layman terms | Leader of the technical organisation within the company | Right-hand man of the CEO | Good standing in the industry
I need a detailed understanding of the key factors driving the headline positions shown in reports
I want to know how my company can improve its technical position in the market
I need to drill down into underlying issues and see, for example, cities where a competitor is dealing with capacity issues, or understand how new device capabilities are being utilized on their network versus the competitors’.
What should we solve?
Provide me with a more detailed understanding of the factors driving the headline rankings shown in Competitive Intelligence.
Help me identify why my network is underperforming.
Tell me where in my network I should focus.
Explain to me the obstacles from the technical standpoint
During the process, I collaborated not only with the Product Management team but also with the Analyst and Data Science teams to ensure that all their technical requirements were met:
All reported metrics are accompanied by confidence intervals calculated using Opensignal’s data science techniques, ensuring their accuracy and reliability.
We offer three levels of granularity for all metrics: national, regional, and city-level, using standard, internationally recognized area definitions and datasets.
Top-level metrics are broken down into components, such as Video Experience into Load Time, Stalling Occurrence, Data Consumption, and Throughput, or Speed into Peak Speed and Speed Consistency.
Data can be aggregated over the last 30 or 90 days with historical trends maintained.
It is important to note that not all technologies (2G, 3G, 4G, 5G) are available for all metrics, which introduces inconsistency and poses a significant design challenge.
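The requirements above can be pictured as a small data model. This is only an illustrative sketch in Python, not Opensignal's actual schema; all names and numbers are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One reported metric with its confidence interval and breakdown."""
    name: str
    value: float
    ci_low: float          # lower bound of the confidence interval
    ci_high: float         # upper bound of the confidence interval
    granularity: str       # "national", "regional", or "city"
    window_days: int       # aggregation window: 30 or 90 days
    components: list["Metric"] = field(default_factory=list)

# A top-level metric decomposed into components, as described above
# (values are invented for illustration)
video = Metric(
    name="Video Experience", value=58.1, ci_low=57.4, ci_high=58.8,
    granularity="national", window_days=90,
    components=[
        Metric("Load Time", 1.9, 1.8, 2.0, "national", 90),
        Metric("Stalling Occurrence", 1.2, 1.1, 1.3, "national", 90),
    ],
)
```

Structuring every metric the same way, with its interval and an optional list of sub-metrics, is one way to keep the design modular as new metrics arrive.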
Studying cases with our internal experts helped me learn the mental models used in analysis.
In the example below, two main factors contributed to Rakuten’s low speeds:
Only one frequency band available for 4G (Band 3: 1825–1845 MHz), with some regions limited to 5 MHz
The average national speed was dragged down by low-performing regions such as Kyushu, Chugoku and Tohoku, potentially due to domestic roaming with KDDI, where users are limited to 2 Mbps.
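The “dragging down” effect is simply weighted averaging: the national mean is the user-weighted average of regional means, so large low-performing regions pull it well below what users in strong regions experience. A toy illustration with invented regions and numbers (not Rakuten’s real data):

```python
# Hypothetical regional average download speeds (Mbps) and user shares
regions = {
    "Kanto":  (32.0, 0.50),   # (mean speed, share of users)
    "Kansai": (28.0, 0.25),
    "Kyushu": ( 6.0, 0.15),   # low-performing region
    "Tohoku": ( 5.0, 0.10),   # low-performing region
}

# National mean = sum of regional means weighted by user share
national_mean = sum(speed * share for speed, share in regions.values())
print(round(national_mean, 1))  # 24.4 — well below the 28-32 Mbps strong regions
```

Only a quarter of the (hypothetical) users sit in the weak regions, yet the headline number drops noticeably, which is exactly why the drill-down into regions matters.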
Example of stalling. The chart on the left displays the proportion of devices by the average download speed they experienced. Here we see a peak at the 3.0 Mbps bin and a deviation from the normal distribution at the 6.0 Mbps bin.
On the right: stalling occurrence increases noticeably during the busy hours (8 pm to 12 am).
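The kind of analysis behind the left-hand chart can be sketched in a few lines: bin per-device average speeds and look for secondary peaks that break the expected single-hump shape. The speeds below are invented solely to reproduce the pattern described above (a peak at 3 Mbps and a bump at 6 Mbps):

```python
from collections import Counter

# Hypothetical per-device average download speeds (Mbps)
speeds = [2.8, 3.1, 3.0, 2.9, 3.2, 6.1, 5.9, 6.0, 12.4, 15.0,
          18.2, 20.1, 9.5, 11.0, 3.0, 6.2, 2.7, 14.3, 16.8, 3.1]

# Bin to the nearest whole Mbps and compute the share of devices per bin
bins = Counter(round(s) for s in speeds)
total = len(speeds)
distribution = {b: bins[b] / total for b in sorted(bins)}

# A healthy network shows one smooth hump; extra peaks (here at 3 and 6 Mbps)
# often point to throttling or capacity caps affecting a subset of users
for mbps, share in distribution.items():
    print(f"{mbps:>3} Mbps: {'#' * int(share * 40)} {share:.0%}")
```

The clusters at 3 and 6 Mbps stand out against the otherwise spread-out tail, which is the visual cue an analyst would follow up on.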
Another example, where we see a decline in a video metric for one of the operators. Looking at the other charts, it’s clear that this operator is actively throttling some of their users.
Many metrics are built asymmetrically: metrics from similar groups have different properties, or lack certain dimensions, which is not ideal. In addition, new metrics were on the roadmap with many unknowns, so the system had to be scalable.
A workshop in which we discussed visualisation styles and their use for different types of metrics and drill-downs.
Our key audience for Performance Intelligence differs significantly from that of Competitive Intelligence. While our new product offers deeper granularity and serves a more technical understanding of the problem, we must be careful not to cannibalize our existing product. Additionally, from a design and development perspective, the user interface (UI) must be consistent across all our products, enabling users to navigate seamlessly and efficiently between them. In this case, however, we are delivering more complex and specific content.
Some examples of competitive solutions.
Performance Intelligence MVP
We allow different types of visualisations and drill-downs of our gold-standard metrics and their sub-metrics.
The art of visualisation
To address these challenges, I began by studying and following the guidance of the Interaction Design Foundation on best practices for visualizations. I created dozens of visualizations for various types of drill-downs, carefully selecting the best ones based on user testing and feedback from colleagues with a scientific, technical mindset. It was a challenging task considering the wide variety of data points generated from various methods, billions of measurements updated every week, and the need to compare them in different settings to draw conclusions.
Later on, I experimented more with visualisation styles and drill-downs.
User Interface (UI)
Maintaining consistency with our first product, Competitive Intelligence, was paramount. The UI design also had to be scalable, since Opensignal continuously develops new metrics. To achieve this, modularity was the key focus of the design, along with scenarios that anticipate the adaptation of metrics, ensuring that the design remains relevant for a long time.
During the design process, I regularly asked our consultants for their feedback, as they had a deep understanding of the industry’s reality. The design process is iterative, and we constantly incorporated feedback to improve our product.
On top of that, based on client feedback provided after implementation, we will introduce an overview page, early indications of changes, and comparisons between different metrics and dates.
Instead of the literal option of comparing charts side by side, I proposed, as a concept, customisable layouts and grids. The advantage is that clients could build their own views, corresponding to their everyday needs.
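One possible way to model such a customisable grid, purely as a hypothetical sketch (the field names and values here are mine, not the product's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Tile:
    """One chart placed on a user-defined dashboard grid."""
    metric: str        # e.g. "Download Speed" or "Stalling Occurrence"
    chart: str         # visualisation style, e.g. "line" or "distribution"
    col: int           # grid column (0-based)
    row: int           # grid row (0-based)
    width: int = 1     # number of columns the tile spans

# A saved "everyday view": a speed trend next to its distribution,
# with stalling tracked underneath
my_view = [
    Tile("Download Speed", "line", col=0, row=0, width=2),
    Tile("Download Speed", "distribution", col=2, row=0),
    Tile("Stalling Occurrence", "line", col=0, row=1),
]
```

Saving a view as a plain list of tiles keeps the concept simple: adding a new metric or chart style means adding a tile, not redesigning the page.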