We are clearly in the cloud computing era. We still use our own personal computers but mostly just to access centralized services like Dropbox, Gmail, or Office 365. Many devices now rely on AI and data that is stored in the cloud. Most of our day-to-day processing is hosted on infrastructure from a few cloud providers.
Everything that could be centralized has been centralized, which is why companies are now looking to the “edge”. Edge Computing is computing done as near as possible to the data source, instead of relying on a distant cloud data center. This does not mean that the cloud is disappearing; it is simply moving closer to the data sources.
One of the most significant advantages of Edge Computing is reduced latency. If a computer has to ask a computer far away before it can do anything, the user perceives that delay as latency. For example, voice assistants currently have to round-trip to the cloud to answer every request, and the resulting delay can be very noticeable.
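To make the difference concrete, here is a minimal TypeScript sketch contrasting a cloud round trip with on-device handling. The endpoint URL and the canned local "model" are hypothetical stand-ins, not any vendor's actual API:

```typescript
// Stand-in for an on-device model: answers instantly, no network involved.
function localAnswer(query: string): string {
  return `local answer for "${query}"`;
}

async function answerViaCloud(query: string): Promise<string> {
  const started = Date.now();
  // Every request pays a full network round trip to a distant data center.
  const res = await fetch("https://assistant.example.com/answer", {
    method: "POST",
    body: JSON.stringify({ query }),
  });
  const answer = (await res.json()).text;
  console.log(`cloud round trip: ${Date.now() - started} ms`); // often 100+ ms
  return answer;
}

function answerOnDevice(query: string): string {
  const started = Date.now();
  const answer = localAnswer(query); // no network hop, only local compute
  console.log(`on-device: ${Date.now() - started} ms`); // typically < 10 ms
  return answer;
}
```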
This explains a recent rumor that Amazon is working on AI chips for Alexa. These would allow Echo devices to rely much less on the cloud, reducing latency, lowering server costs, and improving user privacy.
Privacy is the other great advantage of the edge. Performing encryption and storing biometric information on the device itself reduces vulnerabilities. The computing load is distributed across the edge, but the allocation of work is still managed centrally, and good management of edge computing is crucial for security, as we have seen with botnets built from the IoT devices of unsuspecting users.
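As an illustration, here is a minimal sketch of keeping a biometric template encrypted on the device so that nothing usable ever has to leave it. It assumes Node's built-in crypto module; in a real device the key would live in a secure enclave or TPM rather than in memory:

```typescript
import { randomBytes, createCipheriv } from "node:crypto";

// Illustrative only: in practice this key is held in hardware-backed storage
// and never leaves the device.
const deviceKey = randomBytes(32);

function encryptTemplate(biometricTemplate: Buffer): Buffer {
  const iv = randomBytes(12); // fresh nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", deviceKey, iv);
  const ciphertext = Buffer.concat([
    cipher.update(biometricTemplate),
    cipher.final(),
  ]);
  // Store iv + auth tag + ciphertext locally; only a match/no-match result
  // ever needs to be sent to the cloud.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}
```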
To this end, Microsoft is working on Azure Sphere, which combines a certified microcontroller, a managed Linux-based OS, and a cloud security service. This would make IoT devices much harder to hack, as well as centrally updated and managed. We expect that in the future most new hardware will be updated automatically and have its security managed centrally, to avoid the risk of rogue botnets being created.
Higher security is not the only way that edge computing will help solve the problems presented by the IoT; it will also save bandwidth. If devices can decide locally which data is important and discard the rest, there is no need to stream everything they collect, which lowers bandwidth needs significantly.
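A minimal sketch of that idea, assuming a hypothetical temperature feed and upload callback: the device forwards anomalies immediately and sends only periodic summaries for everything else, instead of streaming every raw sample.

```typescript
interface Reading {
  timestamp: number;
  celsius: number;
}

const NORMAL_RANGE = { min: 10, max: 35 }; // domain-specific thresholds
const buffer: Reading[] = [];

function onReading(reading: Reading, upload: (payload: object) => void): void {
  if (reading.celsius < NORMAL_RANGE.min || reading.celsius > NORMAL_RANGE.max) {
    upload({ kind: "anomaly", reading }); // important: send immediately
    return;
  }
  buffer.push(reading); // ordinary samples are summarized, not streamed
  if (buffer.length >= 1000) {
    const avg = buffer.reduce((sum, r) => sum + r.celsius, 0) / buffer.length;
    upload({ kind: "summary", count: buffer.length, avgCelsius: avg });
    buffer.length = 0; // discard the raw samples after summarizing
  }
}
```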
Google is even working on making websites run more of their processing at the edge to reduce bandwidth needs. PWAs (Progressive Web Apps), which cache assets and run logic locally in the browser, are a good example of this idea.
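For instance, a PWA's service worker can cache the application shell on the device so repeat visits are served locally instead of re-downloaded. A minimal sketch, with illustrative asset paths:

```typescript
// Runs in a service worker context (registered by the page, not shown here).
const CACHE = "pwa-cache-v1";
const ASSETS = ["/", "/app.js", "/styles.css"];

self.addEventListener("install", (event: any) => {
  // Pre-cache the application shell on first load.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  // Serve from the local cache when possible; hit the network only on a miss.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```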
The most remarkable example of where edge computing is needed is self-driving cars. The latency, privacy, and bandwidth needs of a self-driving vehicle mean it cannot rely on the cloud. There is no room for latency when the car needs to swerve to avoid an obstacle, and you cannot rely on an inconsistent cellular network to stream such essential data.
But cars also represent a full shift away from user responsibility. A self-driving car must be managed centrally. It needs to get updates from the manufacturer automatically, it needs to send processed data back to the cloud to improve the algorithm, and it definitely must not be allowed to be hacked and made part of a botnet.
Most large companies are now embracing some form of Edge Computing, and it is reasonable to expect more and more to adopt it in the future. This is where the future of the IoT lies, and where Apache Kafka works best in conjunction with Waterstream.
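As a concrete illustration of that pairing: Waterstream exposes Kafka through the MQTT protocol, so an edge device only needs a lightweight MQTT publish for its data to land in a Kafka topic. A minimal sketch using the MQTT.js client; the broker URL and topic name are placeholders:

```typescript
import mqtt from "mqtt"; // MQTT.js client

// Connect to a Waterstream-style broker that maps MQTT topics to Kafka topics.
const client = mqtt.connect("mqtts://broker.example.com:8883");

client.on("connect", () => {
  // The published message lands in the corresponding Kafka topic,
  // where the rest of the pipeline consumes it.
  client.publish(
    "vehicles/telemetry",
    JSON.stringify({ vehicleId: "demo-1", speedKmh: 42, ts: Date.now() })
  );
});
```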