

Connected Product Trials

Connected products are sufficiently complex that a field trial is an essential stage between workshop development and mass production. This paper discusses the key objectives of a trial and, most importantly, how to tell when you’ve finished your trial and are ready for launch.

A trial is, by definition, an experiment. You hope everything will go smoothly but you should also anticipate surprises and expect to learn something. If you were confident that there’d be no surprises you wouldn’t be doing a trial. But neither is it an open-ended learning process: it should have definite goals, both of risk reduction (catching mistaken assumptions that are costly to put right later) and of confidence building (are we really ready to launch?).

Here we discuss three key objectives which, in our experience, determine whether a connected product trial is successful and help ensure a smooth mass deployment.

1. Does the technology work?

This is probably the most obvious question a trial must answer. Internet of Things (IoT) technology stacks are complex: there are lots of components, and therefore lots of integration points, and the technology is deployed in the real world, which is a messy and uncontrolled place. For many IoT products the “happy path” where everything is working may be fairly trivial to code for, but there are so many ways that a product can fall off the happy path when exposed to the real world (and real users who don’t follow instructions) that this is where the vast majority of your code and effort may end up. Trials help uncover these unforeseen traps off the happy path.
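
To make this concrete, here is a minimal sketch (in Python, with the upload operation passed in as a placeholder) of the kind of defensive code the unhappy paths demand – retries with exponential backoff – which a happy-path prototype typically omits entirely:

```python
import random
import time

def upload_with_retries(upload, max_attempts=5, base_delay=1.0):
    """Call upload() and retry on failure with exponential backoff.

    In the field, connections drop, servers time out and users
    power-cycle devices mid-transfer - none of which the happy
    path ever exercises.
    """
    for attempt in range(max_attempts):
        try:
            return upload()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts - 1:
                raise  # give up: queue locally, alert, or back off for longer
            # Jitter stops a fleet of devices hammering the server
            # in lockstep after an outage.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)
```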

It’s important to have an effective customer-support process in place for trials, to capture and diagnose issues – ideally to make them repeatable so that well-defined faults can be handed to the development team for fixing.

Trials must be large enough to uncover problems at a quality level that matches your production ambitions. For example, one device behaving strangely in a 100-unit trial could be dismissed as a one-off, but a 1% in-field failure rate becomes expensive if you plan to ship 10,000 units. It may be sensible to plan trials at multiple scales, or to enlist “beta” customers in an early rollout stage where you can still be vigilant for problems.
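
A little arithmetic helps when sizing a trial. Assuming failures are independent with a true in-field rate p, the chance that a trial of n units surfaces the problem at all is 1 − (1 − p)^n – a minimal sketch:

```python
def prob_seeing_failure(p, n):
    """Probability that a trial of n units shows at least one failure,
    assuming independent failures at a true in-field rate p."""
    return 1 - (1 - p) ** n

# A 1% failure mode is more likely than not to stay invisible in a
# 50-unit trial, yet would mean ~100 failed units across 10,000 shipped.
print(prob_seeing_failure(0.01, 50))   # ~0.39
print(prob_seeing_failure(0.01, 300))  # ~0.95
```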

Programmers know that good tests give you good “coverage” – that the tests have exercised every part of the code. Likewise a well-defined trial will test every technical part of the product. In our experience the important technical areas to test are:

  • the physical hardware, which you can only change later at significant cost, e.g. durability, usability, power usage, battery life, memory capacity, anything on local storage (such as a memory card) that cannot be remotely updated, antenna performance, etc.

  • software update technologies, i.e. the ability to do upgrades in the field. Your production code will have bugs, missing features and security flaws, so upgrades are not optional, and the cost of having to physically visit a production device can be significant. Nor are software upgrades trivial to do (remember that complex stack of technology) – see the sketch below.
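
To illustrate what a field-upgrade mechanism involves, here is a minimal sketch of an update check. The manifest URL and field names are hypothetical, and a production implementation would also need code signing, staged rollout and a fallback partition so a bad image can’t brick the device:

```python
import hashlib
import json
import urllib.request

MANIFEST_URL = "https://updates.example.com/manifest.json"  # hypothetical
CURRENT_VERSION = (1, 4, 2)

def check_and_fetch_update():
    """Poll for a newer firmware image and verify its integrity."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)
    available = tuple(int(x) for x in manifest["version"].split("."))
    if available <= CURRENT_VERSION:
        return None  # already up to date
    with urllib.request.urlopen(manifest["image_url"], timeout=60) as resp:
        image = resp.read()
    # Verify the download before flashing; a corrupt image is far
    # worse than a failed download.
    if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
        raise ValueError("downloaded image failed integrity check")
    return image  # hand off to the bootloader / flash-writing stage
```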

As you go from trials into production, the question which you and your customers will want to answer is “is it working?” However, you may well discover that you and your customer don’t share the same definition of what appears to be a simple question. For example: will you count situations where a device can’t possibly be working because the user has done something to prevent it, such as not replacing batteries or switching off its internet connection? If a device is not working at a time when the user isn’t trying to use it, does that count? Whatever the definition, you need to know what level is acceptable, and whether your definition matches your customers’ as your trials conclude.
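
It can help to pin that definition down in code before the conversation happens. A minimal sketch, assuming outages are recorded with a (hypothetical) cause field, showing how excluding user-caused outages changes the headline number:

```python
from dataclasses import dataclass

@dataclass
class Outage:
    hours: float
    cause: str  # e.g. "fault", "flat_battery", "user_disabled_wifi"

USER_CAUSES = {"flat_battery", "user_disabled_wifi"}

def availability(total_hours, outages, exclude_user_caused):
    """Fraction of time 'working' - the answer depends on the definition."""
    downtime = sum(
        o.hours for o in outages
        if not (exclude_user_caused and o.cause in USER_CAUSES)
    )
    return 1 - downtime / total_hours

outages = [Outage(10, "fault"), Outage(40, "flat_battery")]
print(availability(720, outages, exclude_user_caused=False))  # ~0.93
print(availability(720, outages, exclude_user_caused=True))   # ~0.99
```

The same month of data yields 93% or 99% “working” depending on which definition you and your customer have agreed.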

2. Does the product proposition fly?

A second essential goal of any trial is to prove whether your proposition “works” for the customer. Does it actually deliver the benefits that you’re promoting? One test of this is whether customers are happy with it – would they recommend it to their friends or colleagues?

An even better acid test is whether they’ll pay for it. Beware hypothetical answers to hypothetical questions, e.g. “would you pay for this?” – what people say they’d do is often very different from what they actually do. This is another reason why a second, larger trial phase with paying beta customers is a good idea – if they’re forking out cash every month, they won’t be shy about highlighting any shortfalls.

When designing a product, it’s very easy to fall into the trap of assuming that your customers will be just like you. But when you start trials you discover things you never expected – for example, that people don’t read instructions, that not everyone has perfect English or eyesight, knows how to insert batteries, or uses the product the way you would. So how do you capture that information?

You can talk to users – reactively as they complain, but also proactively, reaching out to understand their experiences before they complain – and that’s really important for a qualitative understanding. User experience (UX) questionnaires are a great tool for gathering such intelligence, but they need to be carefully designed.

A huge benefit of connecting a product to the internet is that it becomes possible to automatically measure usage and quantitative user experience: how often are customers engaging, how does this change through the lifecycle of the product from initial install, which features are they using, etc. – essentially doing for your product what Google Analytics does for web pages.
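
On the device side, such instrumentation can be as simple as emitting structured usage events to your backend. A minimal sketch – the endpoint and event schema here are hypothetical, and a real device would queue events locally and send them in batches when connectivity allows:

```python
import json
import time
import urllib.request

ANALYTICS_URL = "https://analytics.example.com/events"  # hypothetical

def log_event(device_id, event, **properties):
    """Send one structured usage event, e.g.
    log_event("dev-42", "feature_used", feature="schedule").
    Server-side, these roll up into engagement metrics: daily active
    devices, feature adoption, retention from initial install, etc."""
    payload = json.dumps({
        "device_id": device_id,
        "event": event,
        "timestamp": time.time(),
        "properties": properties,
    }).encode()
    req = urllib.request.Request(
        ANALYTICS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```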

3. Are the support processes ready?

Trials will teach you which systemic issues are going to crop up as you go into production and scale. Examples from our experience include:

  • Batteries failing prematurely due to excessive power consumption

  • Difficult Wi-Fi setup due to interference/overload or firewalls

You can’t just bury your head in the sand and hope that these issues won’t happen. A vital goal for your trial is to identify these systemic issues, allowing you to solve them – fully or at least to a significant extent – before mass deployment, or failing that to design a support process to manage them.

Support tasks and analytics might be carried out manually during trials, as you discover the need for them and reactively throw people at problems to patch them over and let the trial continue. But manual processes don’t scale into production, so trials are also about designing how to automate the resolution of these frequent support issues.
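
What automating one such resolution might look like is sketched minimally below – the telemetry field names and thresholds are hypothetical stand-ins for patterns your support staff spotted by hand during the trial:

```python
def remediation_actions(telemetry):
    """Map known failure signatures (discovered during trials) to
    automated actions, instead of raising a manual support ticket."""
    actions = []
    if telemetry.get("wifi_disconnects_per_day", 0) > 20:
        # Pattern from trials: a congested 2.4 GHz channel. Ask the
        # device to rescan and rejoin rather than waiting for a call.
        actions.append(("send_command", "wifi_rescan"))
    if telemetry.get("battery_voltage", 3.0) < 2.2:
        # Warn the customer before the device dies, not after.
        actions.append(("notify_owner", "Batteries low - please replace."))
    return actions

print(remediation_actions({"wifi_disconnects_per_day": 35,
                           "battery_voltage": 2.1}))
```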

It’s important to understand how often these support issues will happen. Is it a “10% problem” (1 in 10 of your customers will experience it) or a “1% problem”? Does it occur regularly, or only once at setup? For rarer problems you may want to react only when they occur, but in an automated and scalable way. Understanding this lets you minimise your costs whilst still delivering an excellent customer experience.
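
A quick worked example, with purely illustrative figures, of why incidence rate and handling cost together drive the economics:

```python
def annual_support_cost(units, incidents_per_unit_year, cost_per_incident):
    """Expected yearly support cost for a fleet of connected devices."""
    return units * incidents_per_unit_year * cost_per_incident

# Illustrative only: a "10% problem" handled manually at £20 per ticket,
# versus the same problem resolved automatically at £0.50 of compute.
print(annual_support_cost(10_000, 0.10, 20.00))  # £20,000 per year, manual
print(annual_support_cost(10_000, 0.10, 0.50))   # £500 per year, automated
```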

A final point, which spans technology and process, is that delivering a connected product today increasingly means weaving multiple services together. The big point is that by connecting your product you are moving beyond being a product vendor to become a service provider. It is important during trials to ensure that the whole service chain is working properly and that there aren’t any gaps. This is where you need a web app with sufficient data collection and analytics to give you an end-to-end view. You should also allow customers access to it – to run their own reports, to resolve problems themselves, and if that fails to log support tickets with you – so that if something doesn’t work it is at least easy to get help and get it working again.
