Transformational Drivers and In-Memory Data Grids
Reposted from Hazelcast: One of the rewards of a career in technology is experiencing firsthand the constant, pervasive, game-changing nature of innovation. Every few years something really big comes along, and the entire world leaps forward at once. Recent examples include the rise of the Internet, then mobility, then the move to the Cloud, then social media, and now the Internet of Things. These are global ecosystem drivers, each with a vast array of enabling technologies that we as consumers or business users interact with 24/7. All of these technologies are accretive, each new capability enabled by what preceded it.
The Internet wired the world together; mobility then made it significantly more convenient, to the tune of billions of mobile devices continuously accessing and uploading data into information and operational systems. Now the big disruptor is trillions of things following in the footsteps of billions of users. All of this reflects the digitalization of humanity, and, as has happened repeatedly, the layering of additional technology is game-changing.
The term “Volume, Velocity, Variety” has been around since the early 2000s, when businesses were getting wrapped around the axle on the surge of mobility. In 2000 there were ~740 million mobile users globally; current estimates for 2017 put that number at 7.5 billion (a 10X increase), and that does not include IoT devices, which are an order of magnitude more numerous. You can safely assume most businesses are not racing ahead of this growth; every time we are about to catch up, something bigger comes along to accelerate it even further.
Given all this, how can a business (represented by the system being accessed) keep up with this volume of incoming data?
There needs to be a fundamental shift in how high-volume systems are architected. Traditional client/server systems were not built for this level of big data, and while the Cloud is a great underlying enabler, the way transactions are processed needs to change. All of this volume and variety of data has traditionally been kept in a database that must be accessed continuously. Think of a bank authorizing millions of credit card transactions every minute across thousands of e-commerce sites: every transaction triggers a series of algorithmic responses, consumers' identities need to be verified, consumer and vendor accounts need to be updated, third parties need to be looped in, and so on. All of this must happen quickly enough to go unnoticed by a fickle consumer. If the bank has to travel from an e-commerce site to a database across a network of indeterminate latency, and do so in milliseconds, it requires a new level of enabling technology.

If the bottleneck is the database (either the database access itself or the network latency), then the trick is to put the information needed to process the order where it can be accessed instantly, globally, and at high volume. This is where in-memory data grids (IMDGs) come into play: data that normally resides in a database now sits in memory, where it is accessed in milliseconds, and all supporting processes (security authentication, etc.) complete so quickly that the whole exchange appears instantaneous.
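To make the contrast concrete, here is a minimal sketch of the lookup side of that authorization flow, using Hazelcast's Java client API (3.x-era package names). The map name, key format, and CardProfile class are illustrative assumptions, not a real bank's schema; the point is that the authorization data is fetched from cluster memory in a single hop rather than through a database query.

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class AuthorizationLookup {

    // Hypothetical value type held in the grid in place of a database row.
    public static class CardProfile implements java.io.Serializable {
        public boolean active;
        public double availableCredit;
    }

    public static void main(String[] args) {
        // Connect to the grid as a client (cluster addresses come from config).
        HazelcastInstance client = HazelcastClient.newHazelcastClient();

        // Card profiles are partitioned across the cluster's memory; a get()
        // is a single network hop to the owning member, not a database query.
        IMap<String, CardProfile> cards = client.getMap("card-profiles");

        CardProfile profile = cards.get("4111-xxxx-xxxx-1234"); // hypothetical key
        boolean authorized = profile != null
                && profile.active
                && profile.availableCredit >= 42.50;

        System.out.println(authorized ? "APPROVED" : "DECLINED");
        client.shutdown();
    }
}
```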
For this to reach its full potential, several drivers need to be in place. All processing infrastructure needs to move to the cloud, where it can scale up or down as demand varies. And the rapid decrease in memory costs, combined with cloud-scale infrastructure, means that back-end systems that used to reside in a hard-to-access, on-premises database can now be placed in memory and accessed instantly, at a much lower cost.
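As a sketch of that elasticity, assuming Hazelcast's embedded deployment model with default member discovery: each new JVM that starts a member joins the existing cluster, and the grid rebalances its in-memory partitions across the new capacity automatically. The map name below is carried over from the earlier example for illustration.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ElasticScaleOut {
    public static void main(String[] args) {
        // Each call starts an embedded grid member; members discover each
        // other (multicast by default) and form a single cluster.
        HazelcastInstance member1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance member2 = Hazelcast.newHazelcastInstance();

        // Data written through one member is partitioned across the memory of
        // all members; adding or removing members rebalances automatically.
        member1.getMap("card-profiles").put("key", "value");
        System.out.println(member2.getMap("card-profiles").get("key")); // value

        Hazelcast.shutdownAll();
    }
}
```

Scaling down works the same way in reverse: when a member leaves, its partitions are redistributed to the survivors, which is what lets capacity track variable demand.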
In-memory data grids have taken root quickly and deeply because they deliver against requirements that even a few years ago would have seemed impossible. Most end users are unaware of the sophistication of the resources supporting them (the sign of a well-designed product), and technology enablers are starting to deliver response times well below SLA thresholds. Combining cloud and IMDG technologies creates a game-changing ecosystem: infrastructure becomes scalable, letting you immediately tailor your supply of information to your demand for service.
Because everything now resides in memory, companies are able to run multiple algorithms nearly instantly. What does this mean? Your credit card company can not only run a security check when authorizing a purchase, but also execute multiple customer-service and operational fine-tuning algorithms, all within a few milliseconds. By moving the compute process entirely into memory, there is no network latency associated with database access. Big data requires big solutions, and the IMDG has upended the model for how a business can stay well ahead of the curve.
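One common way to achieve this co-location of compute and data, sketched here with Hazelcast's 3.x EntryProcessor API, is to ship the algorithm to the cluster member that owns the key rather than pulling the data across the network. The fraud-scoring rule, map name, and ScorePurchase class below are toy assumptions, not any vendor's actual fraud logic.

```java
import java.util.Map;

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

public class InPlaceFraudScore {

    // Hypothetical entry processor: executes on the member that owns the key,
    // so the computation runs next to the data with no extra network hop.
    public static class ScorePurchase extends AbstractEntryProcessor<String, Double> {
        private final double amount;

        public ScorePurchase(double amount) { this.amount = amount; }

        @Override
        public Object process(Map.Entry<String, Double> entry) {
            // Toy rule: flag purchases far above the customer's rolling average.
            double rollingAverage = entry.getValue() == null ? 0.0 : entry.getValue();
            boolean suspicious = rollingAverage > 0 && amount > rollingAverage * 10;
            // Update the rolling average in place, inside the grid.
            entry.setValue(rollingAverage * 0.9 + amount * 0.1);
            return suspicious;
        }
    }

    public static void main(String[] args) {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, Double> spendAverages = client.getMap("spend-averages");

        boolean suspicious = (Boolean) spendAverages
                .executeOnKey("customer-42", new ScorePurchase(999.0));
        System.out.println(suspicious ? "flag for review" : "ok");
        client.shutdown();
    }
}
```

Several such processors can be dispatched against the same in-memory entries, which is what makes running a security check plus operational analytics within one authorization window plausible.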