Part 3: Measuring success through quantitative insights

The key question everyone should ask when a product or service goes live is: ‘can we prove it’s been successful?’ This post looks at how to gather metrics that demonstrate successful project delivery.

Working in digital means almost everything we create can be measured, because where there’s technology, there’s data. It’s this data that gives us the insight to understand success or failure.

To fully understand the success of a new product and justify its spend, we must first benchmark as-is data, so we have something to measure against during the design phase and after launch. Measuring success is not an afterthought; it’s a process that starts with the business case (and the KPI and ROI predictions should ultimately help shape the reasoning for that business case in the first place).


Typical metrics for our B2B clients

For our B2B clients, as-is data is typically captured manually through ethnographic studies, observing and recording dependent variables for key tasks such as time on task, success rate, error rate and wasted effort. This is usually a manual process because little analytical data is recorded for the common tasks being completed.

Other methods include creating pre- and post-task completion surveys for internal staff. Whatever method(s) are used, it’s important that a minimum of 40-50 responses is captured for each task, because measuring success can only be reliably achieved through quantitative analysis.
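The 40-50 figure above is a rule of thumb, but the statistical reasoning behind it can be illustrated. Here is a minimal sketch (our own illustration, not part of any client process) using the normal approximation for a confidence interval around a task completion rate; it shows how the margin of error shrinks as responses increase. Exact methods such as the Wilson interval behave better at small sample sizes, so treat this as indicative only.

```python
import math

def completion_rate_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a task completion rate,
    using the normal approximation to the binomial distribution."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    # Clamp to the valid 0-1 range for a proportion
    return max(0.0, p - margin), min(1.0, p + margin)

# With 8 of 10 users completing the task, the interval is very wide;
# with 36 of 45 (the same 80% rate), it narrows considerably.
print(completion_rate_ci(8, 10))
print(completion_rate_ci(36, 45))
```

The narrower the interval, the more confidently an improvement over the benchmark can be claimed, which is why a handful of responses per task isn’t enough.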


Typical metrics for our B2C clients

For our B2C clients, where live sites, systems or applications already exist, defining the as-is state is typically achieved by recording existing analytical data (such as time on site, clicks, completion rates and repeat visits). Other specific measurements for capturing the as-is state include intercept surveys on existing live sites and SUS (System Usability Scale) scores to record a user’s attitude towards the existing site. Other options include NPS (Net Promoter Score) and CSUQ (Computer System Usability Questionnaire).
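As a rough illustration of how SUS responses become a benchmarkable 0-100 score, here’s a minimal sketch. The scoring rules are the standard published SUS formula; the function name and example responses are our own.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: a broadly positive respondent
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Averaging these per-respondent scores across a large enough sample gives the benchmark attitude score the new design must beat.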

The recorded scores, along with the supporting findings, become the benchmark data against which all future improvements should be measured. Whatever measurement techniques or tools are used, identifying reliable trends and statistics requires large volumes of data, which is why these are quantitative studies.


Measuring success on proposed concepts

Once concepts (represented by wireframes or prototypes) are ready to receive feedback within the evaluation research phase, the benchmark as-is data is used to help shape the test script, ensuring we’re asking the right questions. We then use a range of quantitative analysis tools to directly compare the proposed concepts and gather reliable results, so future design decisions are based on fact rather than subjective opinion. This feedback and analysis all happens within a typical agile project structure.
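To show the kind of comparison this involves, here’s a minimal sketch (our own illustration, assuming simple lists of task-completion times for the as-is benchmark and the new concept). It computes Welch’s t statistic, a standard test for comparing two samples with unequal variances; in practice a statistics package such as SciPy would also report a p-value.

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.

    A large positive value suggests sample_a's mean is genuinely higher
    than sample_b's, rather than differing by chance alone.
    """
    na, nb = len(sample_a), len(sample_b)
    var_a, var_b = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    standard_error = math.sqrt(var_a / na + var_b / nb)
    return (mean(sample_a) - mean(sample_b)) / standard_error

# Benchmark task times (seconds) vs. times on the new prototype
as_is = [92, 105, 88, 110, 97, 101]
concept = [61, 70, 58, 66, 73, 64]
print(welch_t(as_is, concept))  # positive: the concept looks faster
```

A comparison like this turns “the prototype feels quicker” into a defensible, quantified claim against the benchmark.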

Forrester estimates that for every $1 it costs to fix a problem during design, it costs $5 to fix the same problem during development. Worse, if a problem is not spotted until after release, that price rockets to $30.

The benefits of measuring success

There are huge benefits to measuring success throughout the design process. It builds confidence during the design phase that the proposed experience is suitable. And if it isn’t, we know to pivot the design direction before committing to code being cut (which, as above, is five times more costly).

Gathering user feedback and measuring success ensures we reduce subjectivity where possible and focus on making decisions based on fact. This can be a culture shift in some organisations (both large and small), but it’s one that will benefit the business: it reduces unnecessary debate about design, reduces ambiguity in the design direction, and ultimately ensures we deliver a truly customer-centric solution.

Once a product, website or application goes live, key business stakeholders will want proof of tangible benefit to the business, through greater conversions, efficiencies, satisfaction and completion rates, all of which lead to a more profitable business. If you can prove success and business benefit, then why wouldn’t future funding be approved?

Quickly prove something will fail!

Big tech businesses like Google pay bonuses for people to kill projects. The quicker you can prove something is going to fail, the more money you’ll save and the less customer negativity you’ll create. Decisions like this are based on reliable insights and statistics.


Related posts

Part 1: The role of a UX designer

Part 2: Understanding the needs of the customer

Let’s talk.


Get in touch today with our Head of XD, Andy Wilby, to discuss how our UX expertise can help improve and innovate the digital experiences of your products, across responsive websites, native apps and business systems. We have 20 years of experience delivering outcomes that help businesses and organisations thrive by realising their potential.

Andy Wilby | Head of Experience Design

07595 878876
