Consider this: you arrive home one night in the dead of winter expecting your home’s heating system to have done its job as you programmed it (and as it alerted you it had, via smartphone message, hours earlier), but when you cross the threshold you are greeted by frigid air. You run to the basement to check the furnace, but there are no outward signs that it’s broken.
You are about to call the plumber when suddenly another message flashes on your smartphone: “Welcome home. Cold enough for you? Please wire $1,000 to our untraceable account, or your furnace will never work again.”
While the above scenario may seem to come straight from a Liam Neeson film, it illustrates the new reality of a world where everyday objects have been programmed to talk to us. In doing so, they are also opening new cyber-doors to forms of intrusion that were, until recently, not even remotely considered.
There is little doubt that IoT brings near-unfathomable potential for time savings, cost savings, and quality-of-life enhancements but, as with most matters, there are two sides to the coin. Bringing these benefits to bear creates a number of risks and, even if the risk-to-benefit ratio is favorable, doing it right will require mindful consideration.
Most of these risks fall along the subtly contrasted continuum of privacy and security. In this post, we’d like to draw out some of the distinctions between these two risk factors and offer some clarity on how application providers should think about them.
The first, and more obvious, potential cause for sticky situations is data security. No real surprises here, save to say that many IoT devices carry some form of cloud or mobile service, a high percentage of which are exposed to an increased risk of data vulnerability or security breach. Concerns fall under headings such as insufficient authentication or password strength, lack of transport encryption, weak web-interface credentials, and insecure software-update processes.
From a form-factor standpoint, the industry is already doing its part to institute a “security by design” mindset for IoT devices. Devices are now built with internal components that allow for wireless connectivity to be enclosed and protected.
But in order to bring security full circle, a number of additional measures need to be put into place:
- Conduct a privacy or security risk assessment for all products and applications before going live
- Encrypt all data that gets collected
- Minimize the data that gets collected and, more importantly, that which gets retained
- Institute automatic alerting for off-pattern data transmission
Interestingly, M2M presents some challenges on the data-encryption front. The most obvious method, SSL/TLS, can be problematic on constrained devices because of the additional processing power and memory it requires and the increased cost in wireless airtime it represents. A more practical avenue is to create a site-to-site VPN tunnel from the M2M operator to the backend server’s network. This allows encrypted data transmission across the most vulnerable segment of the network path – the Internet. A site-to-site VPN also creates efficiencies by not increasing the amount of wireless data consumed, and by offloading all encryption and decryption processing to powerful network appliances.
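As an illustration, a site-to-site tunnel of this kind might be described in a strongSwan-style IPsec configuration. The gateway addresses, subnets, and cipher suites below are hypothetical placeholders for the M2M operator and backend networks, not a prescribed setup:

```ini
# /etc/ipsec.conf (sketch) -- site-to-site tunnel between the M2M
# operator's network and the backend server's network.
conn m2m-to-backend
    type=tunnel
    left=203.0.113.10           # M2M operator gateway (placeholder)
    leftsubnet=10.10.0.0/16     # fixed device IP range (placeholder)
    right=198.51.100.20         # backend network gateway (placeholder)
    rightsubnet=10.20.0.0/16    # application servers (placeholder)
    keyexchange=ikev2
    ike=aes256-sha256-modp2048  # encryption handled by the gateways,
    esp=aes256-sha256           # not by the constrained devices
    authby=psk
    auto=start
```

Because the gateways terminate the tunnel, the devices themselves carry no encryption overhead, and each device keeps a fixed address inside the tunneled subnet.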
As for monitoring data flow, any application worth its salt must be able to log abnormalities in the data it receives. If, for example, a device is set up to send sensor data at regular intervals but inexplicably breaks pattern, the system should notify administrators and instantly block the device from communicating with the server. Going back to the site-to-site VPN tunnel: if one is in place, the misbehaving device will have a fixed IP address, making it much easier to isolate and block.
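A minimal sketch of this kind of off-pattern detection, assuming devices report on a fixed expected interval and carry the fixed per-device IP addresses a site-to-site VPN provides (the class and method names here are our own, not an established API):

```python
import time


class TransmissionMonitor:
    """Flags and blocks devices whose reporting cadence breaks pattern."""

    def __init__(self, expected_interval_s, tolerance=0.5):
        self.expected = expected_interval_s
        self.tolerance = tolerance   # allowed fractional deviation from the interval
        self.last_seen = {}          # device_ip -> timestamp of last report
        self.blocked = set()         # fixed VPN addresses to isolate

    def record(self, device_ip, timestamp=None):
        """Register a report; return False (and block) if it is off-pattern."""
        now = timestamp if timestamp is not None else time.time()
        prev = self.last_seen.get(device_ip)
        self.last_seen[device_ip] = now
        if prev is None:
            return True              # first report establishes the baseline
        gap = now - prev
        low = self.expected * (1 - self.tolerance)
        high = self.expected * (1 + self.tolerance)
        if not (low <= gap <= high):
            self.blocked.add(device_ip)
            self.notify(device_ip, gap)
            return False
        return True

    def notify(self, device_ip, gap):
        # Placeholder: wire this to email/SMS/paging in a real deployment.
        print(f"ALERT: {device_ip} off-pattern (gap={gap:.0f}s); blocked")
```

In a real deployment the `blocked` set would feed a firewall rule at the VPN gateway, so the isolation happens at the network edge rather than in the application.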
These “hard” measures mark the bare minimum for producing a secure M2M environment.
Moving to the personal privacy front, the challenge here mainly revolves around risks that stem from the collection of personal information. There is always a potential for backlash in the march toward monetizing IoT data.
Primarily, misdirected personal data flows have the power to create real risks to the safety of consumers at large. It is not difficult to imagine that information about an individual’s habits, locations, and physical condition, captured over time, can lead to dramatic consequences if it falls into the wrong hands. At a subtler level, companies might use the data to make credit, insurance, and employment decisions without an individual’s knowledge or consent.
Perhaps most germane to this conversation, however, is that consumer perceptions of “too much data” could undermine the confidence necessary for connected technologies to reach their full potential in the first place. A cavalier stance on privacy is akin to cutting off one’s nose to spite one’s face.
The concept of “data minimization” needs to be the driving force here. In all instances, companies should limit the data they collect to what is necessary for the discrete purpose at hand, and be prepared to dispose of it once it is no longer needed. Even where this practice limits the innovative use of data over the long term, sacrificing those kinds of “Big Data” gains should be treated as a cost of doing business.
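In practice, data minimization can be enforced at the point of ingest and again at retention time. The sketch below assumes a hypothetical sensor schema and an illustrative 30-day retention window; the field names and constants are ours, not a recommendation:

```python
import time

# Hypothetical schema: only the fields the application actually needs.
ALLOWED_FIELDS = {"device_id", "temperature", "timestamp"}
RETENTION_S = 30 * 24 * 3600  # illustrative 30-day retention window


def minimize(record):
    """Drop every field not strictly needed for the purpose at hand."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


def purge_expired(store, now=None):
    """Dispose of records once they exceed the retention window."""
    now = now if now is not None else time.time()
    return [r for r in store if now - r["timestamp"] < RETENTION_S]
```

Filtering at ingest means the sensitive fields are never written to disk at all, which is a stronger guarantee than deleting them later.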
Good data-minimization practices guard against two key privacy risks. First, larger data stores present a more attractive target for would-be data thieves and increase the potential harm consumers may suffer from a breach. Just ask the likes of Target or Michaels whether they were happy to have retained such large stores of personal data about their customers.
Second, by gathering and keeping large amounts of data, a company simply gives itself too much temptation to use that data in ways that depart from consumers’ reasonable expectations. Even with policies and practices in place that impose reasonable limits on the use of customer data, that plucky new marketing executive may not be able to help themselves in the race to impress upper management. But if the data is not there to begin with, these missteps never get the chance to blossom.
If a company determines that its business goals absolutely require a larger stream of data from customers, it can seek their consent for collecting those datasets, and live with the results for better or worse.
To sum up, all these great opportunities to better engage customers bring even greater responsibility. Customers need to know about, and be able to approve, any data collected on their behaviors and preferences, and it should be completely transparent to them how the data is used, how it is stored, and what they stand to gain (or lose) by agreeing to share it.
Heightened security and privacy are not, and will not be, viewed as points of differentiation in the present and future connected world – they are the expectation. If you are an application provider, we at KORE want you to understand what you can do to meet, and even exceed, those expectations.