Smart locks, connected lighting, smart appliances and other IoT devices are becoming increasingly common in our everyday lives. They offer great convenience, but they can also cause headaches when they fail to work together. Imagine obtaining a connected product for your home only to have it work incorrectly, inconsistently, or not at all. As IoT products become more common and integrated into our daily lives and the wireless ecosystem, interoperability is critical to ensuring they work well together.
Achieving optimal interoperability requires understanding the various tests, standards and protocols that address functionality and security as devices interact. Currently, there are no industry regulations or guidelines focused solely on interoperability. Manufacturers are left on their own to assess risk, conduct testing, and make necessary adjustments to ensure interoperability of their products.
Start by considering factors that could present challenges when a device interacts with others. These may include:
- Other devices on the network, their software, origins, reliability, and potential cybersecurity issues
- Access control, via the network and other devices
- Possible disruptions to the connected ecosystem
- Default or hard-coded credentials
- Vague/imprecise paths for updates
- Open ports
- Performance within the intended environment
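One way to make a checklist like the one above actionable is to encode it as an automated pre-screen. The sketch below is purely illustrative: the `DeviceProfile` fields, the set of ports worth reviewing, and the flag wording are all invented for the example.

```python
# Hypothetical pre-screening sketch: encode a few of the interoperability
# risk factors above as simple boolean checks on a device profile.
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    name: str
    uses_default_credentials: bool
    documented_update_path: bool
    open_ports: list[int] = field(default_factory=list)

    # Ports commonly left open that warrant review (illustrative only).
    REVIEW_PORTS = {21, 23, 8080}

def risk_flags(device: DeviceProfile) -> list[str]:
    """Return human-readable risk flags for a device profile."""
    flags = []
    if device.uses_default_credentials:
        flags.append("default or hard-coded credentials")
    if not device.documented_update_path:
        flags.append("vague/imprecise update path")
    risky = sorted(set(device.open_ports) & DeviceProfile.REVIEW_PORTS)
    if risky:
        flags.append(f"open ports needing review: {risky}")
    return flags

cam = DeviceProfile("camera", uses_default_credentials=True,
                    documented_update_path=False, open_ports=[80, 23])
print(risk_flags(cam))
```

A pre-screen like this does not replace real assessment, but it gives every device entering the lab a consistent starting point.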
With these considerations in mind, build a test plan by identifying evaluations that can address the identified concerns. Assessments might include those for performance, security, compatibility, or a combination. Evaluations should include a mixture of automated and manual testing. Objectives, resources, and processes, as well as detailed timelines and strategies for test completion, should all be part of the test plan. This will help to ensure a comprehensive program is in place before testing begins.
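As a rough illustration, the required elements of such a plan can be captured in a simple structured record and checked for completeness before testing begins. Every field value below is hypothetical.

```python
# Illustrative test-plan record covering the elements named above:
# objectives, resources, processes, timelines, and a completion strategy.
# All values are invented for the example.
test_plan = {
    "objectives": ["verify pairing with third-party hubs",
                   "confirm recovery after network loss"],
    "resources": {"lab_devices": 12, "automation_rigs": 2},
    "process": ["simulation", "manual usability review", "regression suite"],
    "timeline": {"start": "week 1", "exit_review": "week 6"},
    "completion_criteria": "all objectives pass on two firmware builds",
}

def plan_is_complete(plan: dict) -> bool:
    """Check that every required section is present and non-empty."""
    required = ("objectives", "resources", "process",
                "timeline", "completion_criteria")
    return all(plan.get(k) for k in required)

print(plan_is_complete(test_plan))
```

A completeness check like this is a cheap gate: it cannot judge whether the plan is good, only whether any required section was skipped.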
The testing itself must go beyond examining multiple devices in a room or lab to get results. To be most effective, devices should be tested in real-world environments under realistic scenarios. This should be done at various points throughout the development cycle, not just on the end product. One way to do this is to recreate a live environment, similar to the one where the device will be used.
The following evaluations may be useful when assessing interoperability:
- Simulation/automation testing, emulating real environments and usage: Used to evaluate scale, security, and reliability, these assessments account for other devices, traffic, interference, data loads, and other concerns. Simulation allows you to assess a device without using real boards or servers and to recreate specific conditions that help identify problems.
- Usability: These evaluations account for end use and human interactions. This includes running assessments for usability in a connected environment to ensure products meet expectations and requirements as devices interact with other products, networks and the overall IoT infrastructure.
- Performance: One of the most straightforward assessments in product development, this consists of validating performance across networks in a simulated real-world environment. Crowd-sourcing a large open alpha or beta test, or using tools like JMeter, can help identify weak points.
- Benchmark testing: Evaluate product performance against similar devices already on the market to see how the product compares to competitors and to plan the changes needed to compete with what is already available.
- Regression testing: Plays a key role in making sure that previously developed software continues to perform once it has been altered, interacts with other software, and/or when new features are added. It safeguards performance during development, updates, enhancements, and configuration changes. Testing can be automated and may lead to additional testing depending on the results.
- Cybersecurity evaluations: Ensure products keep data secure and do not infect other devices. A variety of standards are available, suitable for different product categories and risk profiles. These include standards such as IEC 62443, UL 2900, or custom programs such as Intertek’s Cyber Assured. These help mitigate vulnerabilities, software weaknesses and malware through risk management, testing methods and security risk controls in a product’s architecture and design.
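The simulation/automation approach described above can be sketched, in heavily simplified form, as a model of many virtual devices sharing a lossy channel. This is a toy model rather than a real device emulator; the loss rate, message counts, and device counts below are arbitrary.

```python
# Toy simulation sketch: model many virtual devices sending messages over
# a channel with a fixed loss rate, and measure delivery reliability as
# the device count scales. All parameters are invented for illustration.
import random

def simulate(num_devices: int, msgs_per_device: int,
             loss_rate: float, seed: int = 42) -> float:
    """Return the fraction of messages delivered across all devices."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    total = num_devices * msgs_per_device
    delivered = sum(1 for _ in range(total) if rng.random() >= loss_rate)
    return delivered / total

# Scale up the device count and watch whether reliability holds.
for n in (10, 100, 1000):
    print(n, round(simulate(n, 20, loss_rate=0.05), 3))
```

The value of even a crude model like this is that specific conditions (loss rates, device counts) can be dialed in exactly, which is difficult to arrange with physical hardware.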
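For performance testing, a tool such as JMeter drives load against a system and reports latency statistics. The sketch below imitates that idea in miniature: it times repeated calls to a placeholder handler and reports the mean and an approximate 95th-percentile latency. `handle_request` is a hypothetical stand-in, not a real API.

```python
# Miniature load-measurement sketch: time repeated calls to a placeholder
# workload and summarize the latency distribution.
import statistics
import time

def handle_request() -> None:
    # Placeholder workload; in practice this would be a network request
    # to the device or service under test.
    sum(i * i for i in range(1000))

def measure(n: int = 200) -> dict:
    """Run the handler n times and return latency summary statistics."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(latencies),
        # quantiles(n=20) yields 19 cut points; index 18 ~ 95th percentile
        "p95_s": statistics.quantiles(latencies, n=20)[18],
    }

print(measure())
```

Percentiles matter more than averages here: a device that responds quickly on average but stalls on the slowest 5% of requests will still feel broken to users.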
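A minimal automated regression suite might look like the following, where a hypothetical `scale_brightness` function has its existing behavior pinned down by unit tests so that later changes cannot silently break it.

```python
# Sketch of automated regression testing: lock in current behavior of a
# hypothetical brightness-scaling function with unit tests, so updates
# and new features that touch it are caught if they change results.
import unittest

def scale_brightness(percent: int) -> int:
    """Map a 0-100 percent input to the device's 0-255 range."""
    percent = max(0, min(100, percent))  # clamp out-of-range input
    return round(percent * 255 / 100)

class RegressionSuite(unittest.TestCase):
    def test_endpoints_unchanged(self):
        self.assertEqual(scale_brightness(0), 0)
        self.assertEqual(scale_brightness(100), 255)

    def test_out_of_range_still_clamped(self):
        self.assertEqual(scale_brightness(150), 255)
        self.assertEqual(scale_brightness(-5), 0)

if __name__ == "__main__":
    unittest.main(argv=["regression"], exit=False)
```

Suites like this are cheap to run on every build, which is what makes the "test continuously, not just at the end" advice above practical.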
Once necessary evaluations are complete, relevant data must be collected, reviewed, analyzed, and saved. A product may require additional fine-tuning, testing and analysis. It may also be ready for the next stage of development, production, or distribution. However, after launching a product on the market, interoperability assessment and testing is not over.
Once a product is on the market, it requires continued evaluations to ensure it maintains its safety and performance. Manufacturers and developers must issue updates, upgrades, and patches on a regular basis to address any concerns. This will help account for industry and technology developments, new software platforms, emerging viruses/malware/threats, and competitor devices that may disrupt existing ecosystems. It will also ensure performance as the IoT continues to evolve.
About the author: Ray Balolong is a project manager at Intertek, where he focuses on assurance, testing, inspection and certification solutions for IoT and software. He has a decade of experience in software assurance testing and holds a Bachelor of Science degree in Business Administration, with a concentration in Management Information Systems, from California Polytechnic State University.
Edited by Ken Briodagh