The Business Case for In-Memory Technology

Repost from Hazelcast. Any business that is expanding and leveraging transformative technologies, such as IoT or in-memory computing, is already testing the limits of its ability to execute. For vendors who deliver enabling technologies, this is a great opportunity to help innovation leaders really move the needle. Sometimes the business leads will say “we need this in order to do that,” and other times they’ll say “we have this, what can be done with it?” The critical path starts with understanding what is driving the business and which underlying IT initiatives can be used to address strategic goals. Timelines and dependencies matter immensely at this stage; if you can’t align to what is possible or factor in variables such as other vendors’ technology, your initiative will not get very far. It is also important to focus: disruptive technology often opens up a wealth of opportunities, but in the initial stages it’s important to stay focused on a proof point that is meaningful to the people who can write the checks. That implies not only focus, but metrics that can be extrapolated to business results.

The people at the top of the organization are business leaders first, and rely on technology specialists and experts as a means to an end. It’s easy for technologists to get wrapped around the axle on details (forget the tree, look at the forest; that’s where the C-suite has its gaze). Even the most senior technologists (CTO, CIO) are driven by business considerations first: customer satisfaction, competitive offset, regulatory compliance, and so on. If you can describe your technology in a way that addresses what keeps them up at night, you will have their full attention.

In the case of in-memory compute technology there are multiple ways to get people’s attention, and of course, who you’re speaking to determines what you lead with. One of the drivers we see consistently with our customers is the innovation that becomes possible when websites and the associated apps can run (literally) 1,000 times faster: fraud detection jumps to a whole new level because you can run far more detection algorithms in the blink of an eye; artificial intelligence becomes pervasive and capable enough that people don’t realize they’re talking to a computer; transactions that took minutes now take seconds; things that took seconds happen so fast you don’t even notice. A whole new ecosystem of innovative applications becomes possible for companies willing to push the envelope.

From the buyer’s perspective, costs tend to drop noticeably when you move to an open-source core: commodity computing costs less than proprietary alternatives and works just as well (or better, since you can leverage the open-source community), and transactions that used to run on expensive mainframes can be offloaded to an in-memory cache (faster and far less expensive). Relying on an open-source model also lets you avoid vendor lock-in (and things like Oracle audits), with the added benefit that you suddenly have much more control over your architecture, and SLAs become something to brag about rather than fear.
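
To make the “offload to an in-memory cache” point concrete, here is a minimal cache-aside sketch using Hazelcast’s Java map API. It assumes a recent Hazelcast release; the map name `accounts` and the `loadFromSystemOfRecord` helper are illustrative stand-ins for whatever expensive backend (such as a mainframe query) the cache fronts, not part of the Hazelcast API itself.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class AccountCacheExample {

    public static void main(String[] args) {
        // Join (or start) a Hazelcast cluster and grab a distributed map.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> accounts = hz.getMap("accounts");

        String accountId = "ACC-42";

        // Cache-aside: serve the read from memory when possible...
        String record = accounts.get(accountId);
        if (record == null) {
            // ...and only fall back to the expensive system of record on a miss,
            // then populate the cache so subsequent reads stay in memory.
            record = loadFromSystemOfRecord(accountId);
            accounts.put(accountId, record);
        }

        System.out.println("Account record: " + record);
        hz.shutdown();
    }

    // Hypothetical stand-in for the slow, expensive backend call
    // (e.g. a mainframe or proprietary database query).
    private static String loadFromSystemOfRecord(String accountId) {
        return "balance=1000;owner=Jane Doe";
    }
}
```

The design point is simply that the hot read path never touches the backend once the data is cached, which is where both the latency and the cost savings come from.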