5G and In-Memory Deliver on the Promise of Genuinely Cool Technology

Another massive, transformative technology wave is about to hit, and while this won’t be quite as overt as some of the other waves (e.g., mobility, social media), its effect will be much more pervasive and will hit every consumer, business and industry on multiple levels. The next generation of mobile infrastructure, 5G, is in the advanced planning/early rollout stages globally. Similar to other introductions of mobile technology, the deployment of 5G will be uneven at first, then it will lock in and fundamentally change our lives.

The primary value-add of 5G is a massive increase in speed – nearly 100X over today’s fastest networks – plus far greater capacity, so no more latency due to overcrowding. The funny thing is, increasing network speed by 100X won’t be noticeable to most people. The existing mobile networks we use for video conferencing, mobile calls, streaming entertainment and the like already work adequately. Jacking up the speed by a factor of 100 won’t create a 100X better experience; humans process information far more slowly than computers do, so we’re already close to our limit – a blink of an eye can seem like an eternity to a computer.
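To put that human-versus-machine gap in rough numbers, here is a small sketch comparing how many network round trips fit inside a single blink. All of the latency figures are order-of-magnitude assumptions for illustration, not measured benchmarks:

```python
# Rough, illustrative comparison of human perception versus network latency.
# All figures below are order-of-magnitude assumptions, not measurements.

HUMAN_BLINK_MS = 150.0    # a blink of an eye: very roughly 100-300 ms
LTE_LATENCY_MS = 50.0     # assumed typical 4G round-trip latency
FIVE_G_LATENCY_MS = 5.0   # an often-cited 5G latency target (assumed)

def blinks(latency_ms: float) -> float:
    """How many network round trips fit inside one blink of an eye."""
    return HUMAN_BLINK_MS / latency_ms

print(f"4G round trips per blink: {blinks(LTE_LATENCY_MS):.0f}")
print(f"5G round trips per blink: {blinks(FIVE_G_LATENCY_MS):.0f}")
```

A human barely perceives the difference between 3 and 30 round trips per blink – but a machine reacting at every round trip certainly does.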

So where 5G will move the needle is in machinery that requires faster processing. Remote sensors, the Internet of Things (IoT), telemetry and autonomous systems – all those network-based doodads will see a heavy acceleration in their ability to execute, because a 100X speed increase will be noticeable to them. When you combine the capacity and speed of 5G with the real accelerative effects typically delivered by in-memory systems, not to mention super-speedy hardware like Intel® Optane™ memory, the whole framework for execution becomes far more interesting.

The catalyst for application deployment will be the enablement of edge computing, where computational resources move to the edge of the network – an autonomous vehicle on a freeway, sensors on a remote drilling rig, and so on. Even though the networks themselves will be much faster, data will still have to traverse them, and the new generation of applications will demand speeds that are hard to grasp. This is where in-memory technologies paired with next-gen chips, delivered through a lightning-fast cloud, will play well.

This perfect little storm opens up a wealth of incredibly compelling applications, such as:

Gaming – This will become even bigger than it already is. More and more people can’t seem to separate their eyes from the screen for even an instant. So, what happens when a genuinely immersive experience – enabled by an embedded streaming engine in a set-top box that delivers instant haptic feedback – becomes globally available? As the father of a teenager, I’m not too thrilled about this one, but the technology is undeniably cool and will generate vast revenue for both the gaming companies and the haptic hardware providers.

Medical – Touch-sensitive haptic gloves connected to remote surgical robots with lightweight, embedded in-memory processors, manipulated through a high-speed cloud infrastructure. This type of application is already being tested at places like King’s College London, and once it gets into full-on production, it could expand advanced healthcare delivery – both diagnostic and surgical – to remote parts of the world that are currently underserved, potentially saving a lot of lives.

Smarter and faster anything – An obvious example? Smart homes and buildings. Large increases in both download and upload speeds, coupled with speedier edge processing (and presumably at a lower cost), mean a leap in smarter infrastructure connected to a more intelligent ecosystem that anticipates your needs. Example: the sensor-enabled milk container in your smart refrigerator indicates it’s past its expiration date. The fridge sends a message through your high-speed Wi-Fi to the local grocer, which deploys an autonomous vehicle (or a drone, if they can figure out how to deliver without dropping stuff from a 10-foot height) to bring fresh milk to you. Your office system can be updated by smarter printers that order more ink cartridges before they run out, and your smart car can self-diagnose, order parts and service, drive itself to a repair shop while you’re asleep, and drive back home in time for you to go to work.

Drones – Drone-to-drone communication will keep these flying annoyances from running into each other and provide more coordination for uses such as firefighting, disaster relief, livestock management, remote facilities inspection and more. The list here is nearly endless, but the point is that with embedded in-memory streaming engines, drones will know what to do and act as a swarm, rather than being managed individually by an operator.

Autonomous vehicles – This process has already started: driverless cars are creeping around the San Francisco Bay Area now, and pizza delivery robots are all over the campus at UC Berkeley. Wider deployment requires a much faster network connection that can deliver capabilities like vehicle-to-vehicle (V2V) communication. Consider that for an autonomous vehicle, a millisecond of delay could mean the difference between an accident and a near miss; while we humans don’t operate at microsecond speeds, the systems that manage our infrastructure will require it.
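The millisecond claim is easy to check with back-of-the-envelope arithmetic. Here is a quick sketch, using an assumed freeway speed of 30 m/s (about 108 km/h); the figures are illustrative, not drawn from any real V2V system:

```python
# How far a vehicle travels during a given network delay.
# Speed and delay figures are illustrative assumptions.

FREEWAY_SPEED_MPS = 30.0  # ~108 km/h; an assumed freeway speed

def distance_m(speed_mps: float, delay_ms: float) -> float:
    """Distance covered (in meters) at speed_mps during delay_ms of latency."""
    return speed_mps * delay_ms / 1000.0

print(distance_m(FREEWAY_SPEED_MPS, 1.0))    # 1 ms of delay -> 0.03 m (3 cm)
print(distance_m(FREEWAY_SPEED_MPS, 100.0))  # a sluggish 100 ms -> 3 m
```

Three centimeters per millisecond adds up fast: a delay of a tenth of a second is a full car length of blind travel, which is why V2V coordination pushes latency budgets toward the microsecond range.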

An in-memory computing solution, such as stream processing, allows the operationalization of machine learning infrastructure – a critical enabler for artificial intelligence and other applications – to drive edge computing deployments. What will move the needle in these examples is not just running an embedded “agent” at the edge, but running an integrated platform that can deliver app-based microservices in support of data-intensive environments. Everything described here is in various stages of happening now, and while it may not be pervasive yet, it will be a lot sooner than you think.
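To make the stream-processing idea concrete, here is a toy sketch of the pattern an edge device might run: a sliding in-memory window over sensor readings that flags values deviating sharply from recent history. This is purely illustrative – the class name, window size and threshold are invented for the example, and it does not represent any vendor’s actual platform:

```python
# Toy sketch of edge stream processing: a sliding-window average over
# sensor readings, flagging values that deviate sharply from recent history.
from collections import deque

class StreamWindow:
    def __init__(self, size: int, threshold: float):
        self.window = deque(maxlen=size)  # in-memory ring buffer of recent readings
        self.threshold = threshold

    def ingest(self, reading: float) -> bool:
        """Return True if the reading is anomalous versus the window average."""
        anomalous = False
        if self.window:
            avg = sum(self.window) / len(self.window)
            anomalous = abs(reading - avg) > self.threshold
        self.window.append(reading)
        return anomalous

w = StreamWindow(size=5, threshold=10.0)
readings = [20.0, 21.0, 19.5, 20.5, 55.0, 20.0]
flags = [w.ingest(r) for r in readings]
print(flags)  # → [False, False, False, False, True, False]
```

The point of the pattern is that the decision happens entirely in local memory, in microseconds, with only the flagged anomaly ever needing to cross the network – exactly the division of labor that 5G-era edge deployments depend on.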