Summary
Service teams need to work out what success looks like for their DfE service. Collect performance information across all online and offline channels so that you can measure and show that the service is effective and improving.
Why it's important
Having clear objectives, a definition of what success looks like, and appropriate metrics can help you know whether the service is solving the problem it's meant to solve.
Collecting the right information and interpreting it will alert you to potential improvements you need to make and help you know if changes have the effect you intend.
How to meet this standard in every phase
You'll be assessed on what you've done to meet this standard at service assessments. However, even if the service you're working on is not being assessed, it's good practice to consider how you'll meet this standard point.
Discovery
Things to consider:
- define what success looks like for discovery. This should include both quantitative measures, such as conversion rates, and qualitative indicators, such as user satisfaction or specific research outcomes
- how a service may add value to users in your problem space
- assess whether that value could reasonably be realised by developing a service, and only proceed if you think it can
- some data collection to inform baselining
Alpha
Things to consider:
- define what you want to measure, why, and how these measurements will be obtained
- analytics that describe time spent on pages, heatmaps, and whether users start something but don't finish it (see the sketch after this list)
- performance data and reporting to capture baseline measurements on service efficiency, timescales and service level agreements (SLAs)
- qualitative data, which could include insights from users on the current as-is service captured via user research and feedback
- evidence of finding baseline data, or why this is not available, with a plan to show how you will measure future success
- evidence of how the team has iterated and improved metrics and data collection plans as you learn more about user needs
- what metrics you could analyse to support or improve the service
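As an illustration of the kind of journey analytics you might look at in alpha, the sketch below calculates how many sessions reach each step of a transaction and the drop-off between steps. It is a minimal example only: the step names, event data and funnel structure are assumptions, not a prescribed DfE approach or a specific analytics tool's API.

```python
# Illustrative only: measuring drop-off between journey steps, assuming you can
# export page-view events as (session_id, step) pairs from your analytics tool.
# The step names and sample events below are hypothetical.
from collections import defaultdict

JOURNEY_STEPS = ["start", "enter-details", "check-answers", "confirmation"]

events = [
    ("s1", "start"), ("s1", "enter-details"), ("s1", "check-answers"), ("s1", "confirmation"),
    ("s2", "start"), ("s2", "enter-details"),
    ("s3", "start"),
]

def funnel(events, steps):
    """Count unique sessions reaching each step and the drop-off from the previous step."""
    sessions_at_step = defaultdict(set)
    for session_id, step in events:
        sessions_at_step[step].add(session_id)

    report = []
    for i, step in enumerate(steps):
        reached = len(sessions_at_step[step])
        previous = len(sessions_at_step[steps[i - 1]]) if i else reached
        drop_off = 1 - (reached / previous) if previous else 0
        report.append((step, reached, drop_off))
    return report

for step, reached, drop_off in funnel(events, JOURNEY_STEPS):
    print(f"{step}: {reached} sessions reached, {drop_off:.0%} drop-off from previous step")
```

A report like this can help you decide where in the journey to focus user research and which baseline measurements are worth capturing before beta.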
Things to avoid in alpha
- putting measurements against data that doesn't currently exist in the as-is service. However, you may wish to explain why certain measurements could not be included when describing your measurement choices
Beta and live
Things to consider:
- measurement data agreed for the KPIs during alpha has been captured and collated
- quantitative and qualitative data has been combined to measure where the benefits are being realised
- demonstration of how the collated data provides evidence to show how the new service is performing for users
- evidence of performance data being used to make decisions about how to fix problems and improve the service
- engagement with business owners and stakeholders to help make decisions using performance data
- ways to collect metrics and data are iterated and improved as the team learns more about user needs
- regularly publishing performance metrics; these must include cost per transaction, user satisfaction, completion rate, and digital take-up (a worked example follows this list)
- when moving to live, learnings from metrics in beta are applied
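To make the mandatory metrics concrete, the sketch below shows one way you could calculate them from raw counts. The figures, field names and function are hypothetical and for illustration only; they are not drawn from a real service and do not replace your department's agreed KPI definitions.

```python
# Illustrative only: computing the four mandatory performance metrics from
# hypothetical raw counts. Substitute your own service's data and definitions.

def mandatory_kpis(total_cost, transactions_started, transactions_completed,
                   digital_completions, offline_completions,
                   satisfied_responses, total_responses):
    return {
        # total operating cost divided by completed transactions
        "cost_per_transaction": total_cost / transactions_completed,
        # proportion of started transactions that were completed
        "completion_rate": transactions_completed / transactions_started,
        # proportion of completed transactions that came through the digital channel
        "digital_take_up": digital_completions / (digital_completions + offline_completions),
        # proportion of feedback respondents who reported being satisfied
        "user_satisfaction": satisfied_responses / total_responses,
    }

kpis = mandatory_kpis(
    total_cost=50_000, transactions_started=5_000, transactions_completed=4_000,
    digital_completions=3_200, offline_completions=800,
    satisfied_responses=850, total_responses=1_000,
)
for name, value in kpis.items():
    print(f"{name}: £{value:.2f}" if name == "cost_per_transaction" else f"{name}: {value:.2%}")
```

Publishing these figures regularly, alongside the qualitative insight behind them, makes it easier for business owners and stakeholders to see whether the service is improving over time.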
Things to avoid in beta and live
- not capturing or analysing data needed to assess service performance
- failing to engage with stakeholders when interpreting performance data
Profession specific guidance
Each DDaT profession in DfE has its own community and guidance.