The success of a large distributed IoT implementation depends on foresight in selecting the infrastructure (tools) to design and support a robust, reliable system. Along with trusted data sources and strong analytics, there should also be IoT infrastructure tools that reduce downtime of edge devices, sensors, and sensor connectivity.
However, few PoCs progress to a feasible production system, because their proponents fail to see the big picture. There is a tendency to connect a few sensors and devices and use the raw data to show some useful analytics. The availability of Azure IoT PaaS or AWS helps create a secure, connected IoT. That may be a good beginning, but lack of foresight about the application, and thus in the selection of supporting tools and infrastructure, invariably results in challenges that become difficult to overcome in later stages.
One notable challenge is scalability of the product or system. Limitations become apparent as the number of sensors and devices grows (say, beyond 250). Connectivity becomes a major issue, as every device has to be brought up to the same firmware/middleware version. The challenges compound as edge computing becomes ubiquitous: one has to update the algorithms on the device as well, using over-the-air (OTA) updates.
Then the Application Programming Interface (API) for any device and any cloud service needs to be monitored and managed, as increasing connections give rise to hundreds or thousands of endpoints, each a potential point of failure. This requires a robust, automated management system for API logs using a "watchdog" tool.
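To make the watchdog idea concrete, here is a minimal sketch of log-driven endpoint monitoring: scan API call logs and flag endpoints whose failure rate exceeds a threshold. The log format, endpoint names, and threshold are illustrative assumptions, not a real MachineSense interface.

```python
# Hypothetical API "watchdog": flags endpoints whose failure rate in the
# logs exceeds a threshold. Log shape is an illustrative assumption.
from collections import defaultdict

def flag_failing_endpoints(log_entries, max_failure_rate=0.05):
    """log_entries: iterable of (endpoint, http_status) tuples."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for endpoint, status in log_entries:
        totals[endpoint] += 1
        if status >= 500:          # treat 5xx responses as failures
            failures[endpoint] += 1
    return {ep: failures[ep] / totals[ep]
            for ep in totals
            if failures[ep] / totals[ep] > max_failure_rate}

logs = [("/v1/telemetry", 200)] * 94 + [("/v1/telemetry", 503)] * 6 \
     + [("/v1/config", 200)] * 100
print(flag_failing_endpoints(logs))   # only /v1/telemetry is flagged
```

In a real deployment the same check would run continuously over streamed logs and feed an alerting channel instead of printing.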
Sensor data is useless without assured calibration and scheduled recalibration. The IoT system has to manage and control each sensor's calibration throughout its lifecycle in order to produce trustworthy data.
Why does an IoT/cyber-physical system need these supporting tools?
With so many edge devices these days, one needs automation to understand the condition of their health in terms of memory usage, CPU temperature, etc. In the past this was a no-brainer, as very little was done at the edge. With edge analytics, it is a different story.
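A minimal sketch of such an edge-health check: compare reported metrics against thresholds and return the list of problems. The field names and limits are illustrative assumptions; a real agent would read the metrics from /proc, a library like psutil, or a vendor API.

```python
# Hypothetical health check for an edge device. Metric names and limits
# are illustrative, not a real device schema.
HEALTH_LIMITS = {"cpu_temp_c": 85.0, "mem_used_pct": 90.0, "disk_used_pct": 95.0}

def health_issues(metrics, limits=HEALTH_LIMITS):
    """Return the names of all metrics that exceed their limit."""
    return [name for name, limit in limits.items()
            if metrics.get(name, 0) > limit]

reading = {"cpu_temp_c": 91.5, "mem_used_pct": 72.0, "disk_used_pct": 40.0}
print(health_issues(reading))   # ['cpu_temp_c']
```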
Azure and AWS IoT PaaS cloud architects are aware of all of the above-mentioned issues. But in their attempts to address them, they have succeeded only to a limited extent. For example, every IoT PaaS cloud offers connectivity management. Often that is not enough, because in reality IoT system connectivity is a mix of many connections. A competent connectivity manager must track all the connectivity levels with their Tx/Rx levels and connectivity logs, and must be able to build a machine-learning-based model for connectivity diagnosis and API management as well. The end result: automation and machine learning apply not only to the IoT application; the whole IoT infrastructure has to be managed by automation and machine learning too.
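At its simplest, the "machine-learning-based" connectivity diagnosis described above can be sketched as flagging Tx/Rx signal readings that deviate strongly from the recent baseline (a rolling z-score). A production system would use richer models; the data and thresholds here are illustrative assumptions.

```python
# Flag signal readings far outside the recent baseline (rolling z-score).
from statistics import mean, stdev

def anomalous_readings(rssi_series, window=10, z_threshold=3.0):
    """Return the indices of readings more than z_threshold sigmas
    from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(rssi_series)):
        baseline = rssi_series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(rssi_series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

series = [-60, -61, -59, -60, -62, -60, -61, -59, -60, -61, -95, -60]
print(anomalous_readings(series))   # the -95 dBm dropout at index 10 is flagged
```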
MachineSense is offering six automation tools for IoT infrastructure. Details of the tools are below:
- Diagportal: Sensor/Gateway/Cloud connectivity management, edge build management
- Sensor QC/Calibration Management Tool
- Machine/System QC Management Tool
- IoT data simulator/ Data Twin Tool
- Edge device/Cloud system health check Tool
- Distributed Analytic API Management Tool
Connectivity and Build Management:
As IoT systems increasingly tend toward a more edge-driven architecture, for a number of good reasons, managing a large number of edge devices in the field can be a monumental challenge. Connectivity of sensors, edge devices, and gateways, their buffering capacity, energy harvesting, batteries, and the build compatibility between them can all be sources of trouble in supporting the IoT system. Here is a list of how many ways things can go wrong in a practical IoT system:
A. Wireless connectivity can go down
B. A sensor/edge device may need a power reboot
C. Sensor and edge may have incompatible builds
D. Edge analytics need a special update for a couple of devices
E. You sold a sensor two years ago and the customer is only now bringing it online (can happen in industrial IoT)
F. JSON (or any other format) is not reaching the cloud (can be a connection or edge build issue)
G. The edge build is crashing
H. An API is failing after a minor update on an edge
None of the above issues is a stretch of the imagination. Any practical edge-driven IoT system faces them day in and day out. If A-H have to be managed manually, there is absolutely no way one can build an IoT system at scale.
Just assume 100 sensors and 100 edges in a factory system: a very modest IoT deployment. A simple combinatorial analysis shows that the factory can experience 3,200 potential IoT failure points! That's in a single factory. As the number of sensors grows, say to 10,000, the total number of potential failures exceeds 320,000. It is impossible to manually manage such a complex system without automation of the IoT support system.
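One hedged reading of this arithmetic: each of the eight failure modes listed above (A-H) can show up on either the sensor side or the edge side of a link, giving 8 × 2 potential failure points per device. The factor of 16 is an assumption chosen to reproduce the article's figures, not a derivation from first principles.

```python
# Back-of-envelope count of potential failure points in an IoT system.
# The 8 x 2 factor is an assumed reading of the article's numbers.
FAILURE_MODES = 8   # the A-H list above
SIDES = 2           # each mode can hit the sensor side or the edge side

def potential_failure_points(n_sensors, n_edges):
    return FAILURE_MODES * SIDES * (n_sensors + n_edges)

print(potential_failure_points(100, 100))        # 3200
print(potential_failure_points(10_000, 10_000))  # 320000
```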
MachineSense® has built and operated a scalable IoT system for Novatec for the last 4 years and has learned how to manage A-H in the field via various levels of automation tools. SignaDiag is a set of automation tools to manage your IoT system at scale.
Sensor Calibration and QC
The biggest challenge of sensor maintenance is calibration. Sensors drift in value, as all electronics and transducers do. Depending on the sensor, after a few months or years all of its data will be useless unless it is recalibrated. Traditionally, you send the sensor back to the manufacturer and they do the recalibration. Many agencies, such as the FDA and EPA, demand recalibration at regular intervals.
There are additional issues of how to store calibration certificates and apply them properly. On top of everything else, the manual calibration process can be lengthy and error-prone. If an agency does not tolerate any fudging of calibration certificates, one can apply blockchain to store the sensor certificates.
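The blockchain idea can be shown in miniature: store calibration certificates as a hash chain, so altering any past certificate breaks every later link. A real deployment would add digital signatures and a distributed ledger; this sketch, with invented certificate fields, shows only the tamper-evidence core.

```python
# Tamper-evident storage of calibration certificates as a hash chain.
# Certificate fields are illustrative assumptions.
import hashlib
import json

def append_certificate(chain, cert):
    """Append cert to the chain, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"cert": cert, "prev": prev_hash}, sort_keys=True)
    chain.append({"cert": cert, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def chain_is_valid(chain):
    """Recompute every hash; any edited certificate invalidates the chain."""
    prev_hash = "0" * 64
    for link in chain:
        payload = json.dumps({"cert": link["cert"], "prev": prev_hash},
                             sort_keys=True)
        if link["prev"] != prev_hash or \
           link["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = link["hash"]
    return True

chain = []
append_certificate(chain, {"sensor": "S-17", "offset": 0.02, "date": "2021-03-01"})
append_certificate(chain, {"sensor": "S-17", "offset": 0.05, "date": "2021-09-01"})
print(chain_is_valid(chain))        # True
chain[0]["cert"]["offset"] = 0.00   # tamper with an old certificate
print(chain_is_valid(chain))        # False
```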
SignaSensorQC can address all these issues. Since every sensor and its calibration requirements are different, this is not a product but a tool that can be linked to your IoT system to satisfy your calibration needs. If regulatory requirements demand stringent fraud-proofing of calibration certificates, we can implement blockchain-based auto-calibration that is fully protected against any kind of calibration manipulation.
System Calibration and QC
One part of the IoT calibration issue is sensor calibration; another is system calibration, or system quality control. Each sensor is applied to something: a machine, a switch, or a piece of sports gear. When a smart machine or smart thing is "born with sensors," manufacturers gain more options to track it from birth to death, across the whole lifecycle. By comparing against the standard sensor signature (the signature obtained from a known-good machine or "thing"), one can easily isolate a defective machine or thing on the production line. Moreover, transportation damage is common in every industry. Once a smart "thing" is born, it is easy to check after installation whether there has been any damage between its birth and installation (mostly due to rough transportation).
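A hedged sketch of the "standard sensor signature" check: compare a newly produced unit's signature against a golden baseline and flag it if the RMS deviation is too large. The tolerance and the signature data are illustrative assumptions.

```python
# Compare a unit's sensor signature against a known-good baseline.
import math

def rms_deviation(signature, baseline):
    """Root-mean-square deviation between two equal-length signatures."""
    return math.sqrt(sum((s - b) ** 2 for s, b in zip(signature, baseline))
                     / len(baseline))

def passes_qc(signature, baseline, tolerance=0.5):
    return rms_deviation(signature, baseline) <= tolerance

golden    = [1.0, 2.0, 3.0, 2.0, 1.0]   # signature of a known-good unit
good_unit = [1.1, 1.9, 3.1, 2.0, 0.9]
damaged   = [1.0, 2.0, 5.5, 2.0, 1.0]   # e.g. transport damage
print(passes_qc(good_unit, golden))  # True
print(passes_qc(damaged, golden))    # False
```

Running the same check at the end of the production line and again after installation is what isolates transportation damage.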
SignaMachineQC can solve many such issues.
IoT Data Simulator
Getting the right failure data, or data from a defective "thing," is the most difficult part of IoT analytics. The problem is compounded by the fact that critical asset monitoring currently takes precedence over ordinary assets. Critical assets like bridges and big machines can't be destroyed to generate failure data for the hungry data scientists!
Besides, to build an IoT system at scale, one needs to regression-test the system with 10,000 or more sensor streams to see whether the cloud and gateway services hold up.
A failure-data simulator is therefore extremely important for serious IoT development. With SignaSmartTwin one can access existing failure data, and new failure data can be simulated systematically as well.
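A minimal sketch of such a simulator: generate a healthy sensor stream, then inject a drift fault so data scientists get labeled failure examples without destroying an asset. The signal shape, fault model, and parameters are illustrative assumptions.

```python
# Simulate a sensor stream, optionally injecting a progressive drift fault.
import random

def simulate_stream(n_samples, baseline=20.0, noise=0.2,
                    fault_at=None, drift_per_sample=0.1, seed=42):
    rng = random.Random(seed)   # seeded for reproducible test data
    samples = []
    for i in range(n_samples):
        value = baseline + rng.gauss(0, noise)
        if fault_at is not None and i >= fault_at:
            value += (i - fault_at) * drift_per_sample   # progressive drift
        samples.append(value)
    return samples

healthy = simulate_stream(200)
faulty  = simulate_stream(200, fault_at=100)   # fault injected mid-stream
print(max(faulty) - max(healthy) > 5)          # drift pushes the faulty stream up
```

Spinning up thousands of such streams in parallel is also a cheap way to regression-test gateway and cloud services at the 10,000-sensor scale mentioned above.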
System Dashboard – for all the Electronics and Sensors
Other key challenges, especially on IaaS-driven IoT platforms, center on the system health of the server instances, sensor electronics, edge electronics, processes, etc. For example, a system of 10,000 sensors may consist of 10,000 sensor electronics units, 1,000 gateways/edge devices, and 100 servers. Any of them can go down or may need a restart or a diagnostic fix (often just a patch or system update). Server health data is available via API from the public cloud, and one can extract the same level of API-driven health data from gateway (hub)/edge devices and sensor electronics. One also needs to track all the system process data.
This calls for a unified dashboard for the time-series data and an alarm/SMS-driven system to alert the system admin that a particular sensor or server has either gone down or is about to.
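The alarm side of such a dashboard can be sketched as follows: watch a health metric's time series and emit an alert when it crosses a limit, with a simple linear-trend projection to warn before the failure rather than after. The metric, limit, and message strings are placeholders for whatever notifier a deployment actually uses.

```python
# Threshold alarm with a naive linear-trend "may go down" projection.
def check_metric(history, limit, horizon=10):
    """history: recent metric samples, oldest first. Returns alert strings."""
    alerts = []
    if history[-1] > limit:
        alerts.append("DOWN: limit already exceeded")
    elif len(history) >= 2:
        slope = (history[-1] - history[0]) / (len(history) - 1)
        if slope > 0 and history[-1] + slope * horizon > limit:
            alerts.append("WARN: projected to exceed limit soon")
    return alerts

cpu_temps = [70, 72, 75, 77, 80]          # degrees C, rising steadily
print(check_metric(cpu_temps, limit=85))  # ['WARN: projected to exceed limit soon']
```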
In simple words, the whole idea of IoT is automation: reducing manpower while delivering better service and information. If, while checking machine health 24×7, one needs to invest more manpower in IoT system health, then the basic promise of IoT automation is already defeated.
SignaSystemHealth makes the IoT admin's life easy by doing predictive maintenance on the IoT system itself.
API Management for Analytics/Systems
In a standard IoT system, it's fairly common that multiple real-time analytics are extracted and displayed from a single sensor data stream. In our experience, when one extracts a large number of analytics, some work flawlessly while others fail due to bad data, bad connectivity, etc. Unless there is a system that crunches the analytics logs, connectivity logs, and system health logs together, it is difficult to diagnose the source of such subtle errors. This automation tool ensures quick diagnosis of analytic algorithm malfunctions.
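A hedged sketch of that log-crunching idea: when an analytic fails, look for a connectivity or system-health event in the same time window to classify the likely cause. The log shapes, timestamps, and analytic names are illustrative assumptions.

```python
# Correlate analytic failures with nearby connectivity events to
# separate "bad connection" from "bad algorithm or data".
def diagnose(analytic_errors, connectivity_events, window_s=30):
    """analytic_errors: (timestamp, analytic_name) pairs.
    connectivity_events: (timestamp, event_description) pairs."""
    diagnoses = {}
    for err_ts, analytic in analytic_errors:
        nearby = [ev for ev_ts, ev in connectivity_events
                  if abs(ev_ts - err_ts) <= window_s]
        diagnoses[analytic] = ("connectivity: " + ", ".join(nearby)
                               if nearby else "algorithm or data issue")
    return diagnoses

errors = [(1000, "rms_vibration"), (5000, "power_quality")]
events = [(990, "gateway reconnect"), (1010, "packet loss")]
print(diagnose(errors, events))
# rms_vibration correlates with connectivity events;
# power_quality has none nearby, so suspect the algorithm or its data
```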