IT Management is becoming predictive – which is good, right?

Improvements in IT data quality and analysis tools have enabled IT management to spend less time looking into the past and more time enabling the dynamic enterprise of the future. This allows them to anticipate business events more accurately, forecast costs and capacity, and identify operational risks before they materialize. Empowered by these technology-driven insights and predictive capabilities, IT leaders have secured a long-sought seat at the table with their business counterparts during the strategic planning process. IT management becoming more predictive is good. Right? Perhaps, but there are some risks to consider.

Technology-enabled prediction is only as good as the underlying data, and does a poor job of addressing unknown variables. Human intuition and analysis skills have traditionally been used to fill gaps in available data, interpret meaning and project future events. The predictive abilities of most IT leaders are heavily dependent on the quality of information and technology-enabled processing power at their disposal. Modern machine learning systems have made tremendous strides in analyzing large volumes of data to identify trends and patterns based on past and current observations. Their capability to do so is limited, however, by the quality and dependability of data inputs. “Garbage in-garbage out” has been the rule for many years. Recently, advances in data validation and correlation tools have improved this situation somewhat and enable nuggets of goodness to be derived from what was once considered garbage. By filling integration gaps across data sources, resolving conflicts across data sets and validating data for quality/consistency, technology can now come very close to replicating what humans were able to do previously.
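One way to picture this kind of cross-source validation and conflict resolution is a simple record-reconciliation pass. The sketch below is illustrative only: the source names, attributes and precedence rule are hypothetical, and real data quality tooling applies far more sophisticated matching.

```python
# Minimal sketch of cross-source record reconciliation (hypothetical data).
# Each source reports attributes for the same asset; sources earlier in the
# precedence list are trusted more when values conflict.

def reconcile(records, precedence):
    """Merge per-source attribute dicts into one validated record."""
    merged = {}
    # Walk sources from least to most trusted so trusted values win.
    for source in reversed(precedence):
        merged.update({k: v for k, v in records.get(source, {}).items()
                       if v not in (None, "")})  # drop empty "garbage" values
    return merged

sources = {
    "discovery": {"hostname": "app01", "os": "RHEL 8", "owner": ""},
    "cmdb":      {"hostname": "app01", "os": None, "owner": "finance"},
}
record = reconcile(sources, precedence=["discovery", "cmdb"])
```

Each source contributes the fields it knows about, and a "nugget of goodness" (here, the owner recorded only in the CMDB) survives even though that source's other fields were stale.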

Another caveat is that too much granularity can lead to a false sense of confidence. “The weather app on a smartphone reports there is a 13% chance of rain next Tuesday afternoon, starting at 1 pm.” Accuracy aside, what does that number even mean? Technology enables prediction systems to use complex mathematical algorithms to combine sets of data and assess the probability of various outcomes. The results they generate may have the appearance of a high degree of accuracy, but it is always good to apply the principle, “if it looks too good to be true, it probably is.” Technology-enabled predictions should always be validated with a test of common sense and reasonableness to avoid developing a false sense of confidence.

What if the predictions are wrong? Is accountability still in the right place? Business decisions are usually very important, and the impact of bad decisions on the organization can be catastrophic. Each individual within an organization has their own charter, objectives and motives, which drive their focus and behavior. Business leaders are responsible for guiding successful strategy and operations while (in most organizations) IT leaders have a charter more narrowly focused on managing and stewarding information and technology assets. Having IT leaders contribute their skills and capabilities to strategy is an excellent use of their talents, but it is important to make sure the right business leaders remain accountable if things go awry.

The biggest long-term risk associated with developing a reliance on technology-enabled prediction is that business leaders lose their ability to evaluate the business environment and make decisions without technology’s assistance. Analysis, inference and prediction skills must be continually exercised, or they will atrophy and be lost over time. If this happens, the organization loses the checks and balances needed to ensure the information generated by IT is correct.

Learning how to harness the power of technology and information and applying it to create valuable predictive insights for an organization is definitely good; IT leaders should be commended for bringing new capabilities to the decision-making table. As we all know, however, no information is perfect, and technology has its limitations. Becoming entirely reliant on technology for prediction and losing the ability to apply a human filter is a risky situation for businesses. As with many business decisions, it is important to balance the potential benefits with the acceptable risk profile for your organization.

The application of predictive analytics to IT and OT (Operational Technology) systems has tremendous promise, both for IT and the enterprises they support.

The downstream impact of IT data improvements

How well prepared is your organization for growth? What are the challenges to making progress? One often overlooked constraint is the impact of IT data across an organization (in spite of the fact that nearly every decision made is based on data). Perhaps, given this, the real constraint is how you are thinking about your IT data. Could thinking differently about IT data provide the key needed to unlock a treasure of benefits for your organization?

IT systems, and the data they contain for the organization, are often seen as a foundational capability, an underpinning function, or simply a static resource separate from the organization’s core value chain. Instead of thinking of IT data as just “supportive,” perhaps you should consider it having its own value chain – a lifecycle of information that starts with raw data that is processed, refined and combined with other data; put into context; made available to users; and consumed in organizational decision-making processes at both tactical and strategic levels. Framing data as part of a value chain can enable you to see the downstream impact of upstream IT data improvements in the activities that consume them.

When the quality and reliability of IT data improves, leaders have more confidence in the decisions they make. They have the ability to evaluate opportunities and problems faster and more easily, without the need to question and independently validate the information they are continuously receiving. This increase in confidence can lead to the pursuit of more ambitious and more broadly scoped business opportunities, as well as the ability to preemptively mitigate organizational risks.

Data scientists, business analysts and managers rely on IT data as a critical input to drive business process optimization for their organization. Improving the quality of the data available from IT enables data professionals to see process-performance variances more easily and quickly, and to correlate previously independent data sets that can drive new operational insights. Self-service reporting and analytics tools that have gained popularity with this community during the past few years are highly dependent on the quality of data from IT source systems to ensure reliability, accuracy and ease of use.

Data integration improvements across IT systems improve the efficiency of employees involved in executing transactional processes by reducing the need for redundant data entry tasks to keep operational data in sync as transactions flow through business processes. By removing manual tasks, managers and leaders have greater transparency into operational performance with a lower risk of intentional data manipulation and/or human error. Consistent data structures across IT systems have the potential to increase efficiency by enabling the use of lower-cost reporting tools, minimizing the need for manual data reconciliation/scrubbing activities and accelerating the time to develop new data insights – leading to greater agility and business responsiveness.

In addition to moving faster, operating more efficiently and making better decisions, IT data improvements can expand the organization’s capacity to manage more data, which means more customers, more sales, more suppliers, more employees and more profits for shareholders. If these are benefits your organization is seeking, then perhaps it is time to make investments to improve your IT data.

Without the right tools, manually improving the quality of data just does not scale, as the types and volumes of data continue to increase at a staggering pace. Blazent has been at the forefront of managing “Big Data”-scale implementations for some of the largest business entities in the world, by providing an automated solution that delivers the highest data quality by using information gained from multiple sources to create refined data records.

Is data management a thorn in IT’s side?

Modern business leaders depend on data – it provides insights into operational performance, customer needs, supplier risks, market opportunities and essentially every other facet of business. More data is a good thing, right? It means the business is processing more transactions, selling more products and has more information at its disposal to make decisions. While this may be true, if the data in your organization is inaccurate or inconsistent, remains unchecked and continues to grow unmanaged, then it may quickly evolve from being an asset into a massive liability. Managing a company’s data is the responsibility of the CIO and the IT organization. Are they ready, or is data management a thorn in the side of IT?

At the heart of IT’s data challenges are people. During the past few decades, IT organizations have focused on recruiting and developing staff with deep technology skills in hardware and software, often forgetting that IT stands for “Information Technology,” not just technology. More recently, many CIOs have increased their focus on resolving this skillset mismatch by bringing more information management, data science and analytics skills to their organizations.

While IT data skills have struggled to match needs, business users who consume data have increased their data analysis abilities at an alarming rate. Self-service reporting tools have increased in popularity during the past few years, making it easier for business users to analyze data from source systems instead of depending on IT to provide reports. In the past, issues with missing, inaccurate or duplicated data could be filtered, scrubbed and managed before the data was exposed to business users. Self-service capabilities expose data issues directly to users, who often will ask IT for an explanation of why the issues exist and want them fixed, usually immediately.

IT also struggles with the fluidity of business needs related to data. Because information insights are primarily used to drive decision making, the useful life of a specific set of information may be short. After one decision is made, the business shifts focus to the next opportunity or challenge. From IT’s perspective, as soon as one business problem is solved there are more waiting to be addressed. Understanding this dynamic is helpful for IT organizations, but it may require a change in mindset and approach. IT staff members are notorious for over-engineering solutions. When it comes to data, they need to focus on agility, not permanence – delivering just enough information to enable the business decision (and delivering it quickly)!

High-value business insights almost always involve integrated information from disparate data sources. When those data sources are from different vendors and software products, correlating and validating the data can be very complicated. Traditional integration and data warehousing methods built around manual mapping, merging and validation processes struggle to address the volume of data and continual changes in business needs.
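Correlating disparate sources usually starts with agreeing on a common key. As a minimal sketch (with hypothetical data and an invented normalization rule), two vendor systems that name the same assets differently can be joined once identifiers are normalized:

```python
# Sketch: correlating two vendor data sets on a normalized key.
# Data, field names and the normalization rule are hypothetical.

def normalize(name):
    """Reduce vendor-specific naming variants to a common key."""
    return name.strip().lower().replace("-", "")

monitoring = {"WEB-01": {"cpu_avg": 72}, "db-02": {"cpu_avg": 35}}
asset_db   = {"web01": {"owner": "retail"}, "db02": {"owner": "finance"}}

correlated = {}
for raw_name, metrics in monitoring.items():
    key = normalize(raw_name)
    if key in asset_db:  # validated match across the two sources
        correlated[key] = {**asset_db[key], **metrics}
```

Manual mapping does this same work spreadsheet by spreadsheet; automating the normalization and join is what makes the approach scale as volumes grow.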

Modern technology capabilities for data integration and validation can provide IT organizations the opportunity to overcome many of these challenges. Combined with upgrades in the data management skillset and mindset of IT staff, CIOs and their organizations can remove the thorn in IT’s side and be well prepared to manage their company’s data to ensure that it remains an asset that will continue to create value for many years.

A great example of this is Pacific Life, whose IT organization needed to automate the maintenance of more than 100,000 configuration items that supported their service delivery to the business, and was able to do so quickly and relatively easily using Blazent’s Data Intelligence and Integrity solution.

The rise of IoT, what should IT execs be most worried about?

The core role of IT executives is to drive strategic enablement by stewarding the information and technology resources of the organization. This means balancing the needs of empowering individual user productivity and business process efficiency against organization-wide security, manageability and cost considerations.

Those of us with long careers in the technology sector have seen a consistent, cyclical trend of disruptive technology. Every few years, something big appears that changes everything and IT is always on the bleeding edge. These include the rise of the Internet, the move to Software-as-a-Service (SaaS) offerings, followed by mobility, followed by social media and the consumerization of technology. Now, it’s the rise of the Internet of Things (IoT), which in scope and scale dwarfs everything that preceded it. Keep in mind, everything listed above is cumulative as well. It’s not like any of these inventions and technologies have disappeared, so the complexity of it all continues to increase.

Like everything that preceded it, IoT does not exist as a technology in isolation. It is part of a larger trend of hybridization of IT experiences. The modern perspective of IT is user-centric, defining IT experiences as a hybrid environment consisting of end-user devices, client software, connectivity to local or centralized resources, data, location and the context or purpose of the user’s technology interaction. IT experiences are often temporary in nature, existing for only a short period of time to serve a specific need, and then recycled into the IT environment. In this context, IoT is simply a new component in the business-technology ecosystem that is available for consumption as a part of the hybrid-IT experience.

Re-framing the situation around IoT in this way is helpful from the IT executive’s perspective because it allows IoT to be viewed as a variation of the mobility trend, which has been a focus of IT executives’ work for a number of years. IoT is very different from mobility on several levels, however. The source volume will potentially increase from billions of mobile devices to trillions of sensors, a jump of several orders of magnitude. Many IoT devices stream data almost continuously (e.g. security cameras), unlike mobile devices, which tend to be transactional. Because of this, IoT brings its own set of security and manageability challenges that the IT organization must address.

  • IoT Data Going to 3rd Party Locations: While this may seem primarily a security concern, IoT devices streaming data to applications and services outside the organization’s control does pose a significant data integration challenge. The broad variety of sensor capabilities available from IoT devices has a tremendous potential to drive operational insights relative to industrial automation and process behavior. To mine these insights effectively, IT functions must have access to the IoT data and the capability to easily integrate it with data from other sources. Streaming of IoT data outside the organization makes this integration more difficult and costly (if not impossible). There are also potential legal implications if data is stored outside a country.
  • Digital Eavesdropping: This is a considerable problem in the consumer space, with documented examples of security cameras, baby monitors and (alarmingly) cars being hacked. In the business environment, incidents of digital eavesdropping are less publicized, but have the potential for tremendous impact. In the most extreme scenario, IoT devices integrated with cloud-based digital assistants (such as Siri, Alexa or Cortana) could be used to provide a would-be hacker or competitor a front-row seat to confidential office conversations. More likely, IoT devices streaming data to public cloud services managed by 3rd party device manufacturers create a data-security vulnerability and the possibility of sensitive data being intercepted. Recent hacks, such as the compromise of billions of Yahoo email accounts, are just the beginning.
  • Limited Administrative Capabilities: On the surface, it appears most IoT devices are designed for use in a consumer environment, optimized for easy plug-and-play operations. Below this level is industrial IoT, where a vast number of sensors are used for refinery operations, irrigation systems or other functions and processes. In both cases, the devices have limited administrative capabilities. This poses a challenge for IT functions tasked with ensuring service continuity across IoT-enabled experiences or reporting systems. Some IoT-device manufacturers are beginning to introduce enhanced administrative capabilities to address this need, but the enabled devices come at a cost premium. To support IoT devices in business or industrial environments, many IT organizations find themselves developing custom connectors and wrappers around IoT software to enable integration into an organization’s ITSM system.
  • TCO/Cost of Support: IoT devices in general have a very low up-front implementation cost to add them to an operating environment. There is, however, a persistent false expectation on the part of most users about the quality, reliability and expected useful life of these devices, as they are expected to be as robust as enterprise-class software. Because of the expectation gap, IT organizations are beginning to experience a significant degree of technical debt related to the support, maintenance and upgrade of IoT devices. Batteries must be replaced, parts wear out, and software/firmware must regularly be upgraded. TCO models in most organizations have not yet fully adjusted to support IoT components.
  • IT/OT Alignment: Another massive and disruptive trend that is still somewhat below the surface (but not for much longer) is the movement of IT and OT (operational technology) systems towards each other. IT has the analytics and reporting capabilities to enable execs to optimize investments in IT, while OT has its own capabilities to track a huge number of sensor networks related to both discrete and process manufacturing applications. To this point, these two domains have existed separately, but within the same organization; OT data is appearing more and more in IT analytic and reporting systems. These two groups have historically not seen eye to eye, and that must change.

IoT is a technology trend that is here to stay. Like all before it, it is not to be feared, overcome or mitigated, but, rather, organizations must embrace IoT as part of the new “normal.” IoT has gained significant traction in both the consumer market for technology and the industrial market for sensor networks, and in both cases is widely accepted as a useful productivity tool. IT executives are concerned whenever new technology impacts their ability to control the security, manageability and cost of their technology ecosystem. Fortunately, IoT is maturing fast, and organizations will continue to adapt.

As those of us with extensive IT and technology experience know, constant, disruptive change is the model, and this is always a positive development. The actual question is: how do we embrace change and leverage it for competitive advantage? The core issue is not the IoT itself, but what it represents. That puts the focus on one big requirement: data. Trillions of devices, billions of users, all of them generating data at rates that are still hard to grasp. If you’re struggling to stay ahead of the deluge, because of inconsistencies in the quality of the underlying data all these devices are generating, then it’s time to take a big step back and reassess your requirements. At first blush, this looks complicated, and it is. What it is not, is difficult. The right tools, applied at the right time, can make all the difference.

We’re not suggesting “Don’t worry, be happy.” What we are suggesting is focus now; put the right solutions in place and enjoy the benefits.

What your ITSM vendors don’t want you to know about Discovery Technology

Are you comfortable with how your IT Service Management (ITSM) processes address the volume, diversity and rate of change present in your IT environment? Comfortable enough to bet your job on it?

Most traditional ITSM systems and their associated offerings are structured around configuration management processes. These import lists of Configuration Items (CIs) from a variety of source systems, and then map the CIs together either manually or through available data. This has been the approach literally for decades, and has not kept pace with the breathtaking rate of IT change. As a result, this model severely limits the number of CI types that can be effectively managed in the Configuration Management Database (CMDB) and the rate at which environmental changes can be reflected. Current ITSM systems, by themselves, cannot keep up with this pace of change.

Configuration Management and the CMDB are the core of most IT Service Management processes. They serve as a hub to drive critical operational insights by representing the various pieces present within a business technology ecosystem and how the pieces relate to each other. Configuration data is represented using a series of lifecycle states including “as-intended,” “as-designed,” “as-implemented,” “as-operated” and “as-used.” Enterprise Architecture is typically concerned with the “as-intended” and “as-designed” states, with traditional ITSM processes and systems focused on the “as-implemented” state. Creating these states often involves significant manual effort to inventory, map and model the components present in the IT environment.
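To make the lifecycle-state idea concrete, here is a minimal sketch of a configuration item that records snapshots per state. The class, field names and example values are illustrative, not a vendor schema:

```python
# Sketch: a configuration item (CI) tracked across lifecycle states.
# States follow the list above; all other names are hypothetical.
from dataclasses import dataclass, field

LIFECYCLE_STATES = ("as-intended", "as-designed", "as-implemented",
                    "as-operated", "as-used")

@dataclass
class ConfigurationItem:
    name: str
    ci_type: str
    states: dict = field(default_factory=dict)  # state -> attribute snapshot

    def record_state(self, state, attributes):
        if state not in LIFECYCLE_STATES:
            raise ValueError(f"unknown lifecycle state: {state}")
        self.states[state] = attributes

ci = ConfigurationItem("app-server-01", "virtual_machine")
ci.record_state("as-designed", {"cpus": 4, "memory_gb": 16})
ci.record_state("as-operated", {"cpus": 4, "memory_gb": 32})

# Comparing states surfaces drift between design and operation.
drift = {k for k in ci.states["as-designed"]
         if ci.states["as-designed"][k] != ci.states["as-operated"][k]}
```

Comparing the “as-designed” snapshot against the “as-operated” one is exactly the kind of check that is prohibitively manual without tooling, which is the gap the rest of this section describes.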

What continues to be beyond the reach of most organizations are quality, dependable insights into how the ecosystem is actually implemented and how users and business processes are consuming various components. Ironically, it is these operational insights that organizations need most to identify cost reduction and/or value-creation opportunities.

The problem in nearly every instance is that IT depends on discovery tools to determine the breadth and depth of its service offerings. Discovery tools originated in the early 2000s, which in Internet time may as well be a thousand years ago. The IT world has changed dramatically, and continues to do so at ever accelerating rates. The sheer volume of data entering any IT system is overwhelming, not to mention the variety and range of data sources. This entire scenario is also about to become much worse; it isn’t just that mobile technology is creating billions of entry points, but that the number is about to increase by trillions with the rise of sensor networks and the Internet of Things.

Discovery tools are meant to discover; they do not address variables such as false positives, false negatives, lack of contextual references driven by fingerprinting capabilities, latency or granularity. These are the parameters that define the modern IT system, and the current range of discovery tools is not prepared for the task. This is why Gartner stated that “40% of an organization’s data is out of date or inaccurate.” Discovery, and the associated quality needed to provide a reliable referential framework, are at the moment two different things.

The good news is that recent developments in the application of Data Quality Management (DQM) methods to Discovery technology are finally creating dependable snapshots of runtime operations of the ecosystem – which is a frightening prospect for many ITSM vendors. By applying DQM to Discovery, IT is able to identify CIs present in the operating environment, quickly classify them, and map dependency relationships based on both technical connections and time-based correlation of runtime events.
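The time-based correlation of runtime events mentioned above can be sketched very simply: if two components repeatedly emit events within a short window of each other, a dependency relationship is likely. The event data, window size and CI names below are hypothetical, and production tools weigh many more signals:

```python
# Sketch: inferring CI dependency relationships from time-correlated
# runtime events (hypothetical data and correlation window).
from itertools import combinations

events = [  # (timestamp_seconds, ci_name)
    (100, "web-01"), (101, "app-01"), (102, "db-01"),
    (500, "web-02"), (501, "app-01"),
]

WINDOW = 5  # events within 5 seconds of each other suggest a dependency

related = set()
for (t1, a), (t2, b) in combinations(events, 2):
    if a != b and abs(t1 - t2) <= WINDOW:
        related.add(tuple(sorted((a, b))))  # undirected relationship
```

A naive pass like this surfaces candidate relationships (web tier to app tier to database) that DQM processes can then validate against technical connection data before they are committed to the CMDB.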

The value this hybrid technology offers IT organizations is tremendous. By eliminating the need for manual inventory and mapping of CIs, the Configuration Management system will now be capable of capturing and maintaining a larger volume and wider variety of ecosystem components, making this data available for more granular analysis. The addition of DQM to Discovery also supports a greater diversity of CI types by reducing the need for pre-defined classifications and structured relationships. CI types that are recognized can automatically be validated. In addition, previously undiscovered CI types can be captured and classified based on embedded data within the component as well as the context in which the component is being used within the environment.

Perhaps the most important value of applying DQM to Discovery (and a risk to ITSM vendors) is the combined capability to accommodate frequent changes in the operational environment (enabling the CMDB to be updated at the speed of business) and to discover runtime relationships between users and technology components (enabling a more accurate, real-time view to be put into use). With these combined capabilities, companies will have a much more accurate picture of which technology pieces exist within their environment, how they are being consumed and the value being created from them, including the cost of generating that value.

Most ITSM systems are consumed on a per-seat basis, and therefore the vendor’s revenue is directly tied to the number of human users – providing a disincentive to replace manual processes with automation. DQM with Discovery has the potential not only to eliminate many manual configuration management activities, but also, in the future, to serve as a stepping stone toward automated event management, incident management and AI/Bot-based diagnosis and remediation activities (significantly reducing the need for human helpdesk agents). It is understandable that ITSM vendors don’t want their customers to know about Discovery and DQM technology. This is perhaps the single most disruptive technology threatening their future revenue potential.

Before committing to your ITSM vendor’s current Discovery capabilities, start asking the hard questions.

  • How do they contextualize data?
  • How do they address the batch disconnect?
  • How do they recognize false positives or negatives?
  • How do they track fingerprinting and account for it?
  • Etc.

Failing to ask these types of questions leaves you in the 60% club. If you were being graded, you’d earn at best a D- grade. Do you want to be a D student, or would you rather be an A student? The current batch of discovery tools is depressing an entire industry grade curve.

Benefits of Machine Learning in IT Infrastructure

During the next 5 years, machine learning is poised to play a pivotal and transformational role in how IT Infrastructure is managed. Two key scenarios are possible: transforming infrastructure from a set of under-utilized capital assets to a highly efficient set of operational resources through dynamic provisioning based on consumption; and the identification of configurations, dependencies and the cause/effect of usage patterns through correlation analysis.

In the world of IT infrastructure, it’s all about efficient use of resources. With on-premise infrastructure (compute, storage and network) utilization rates for most organizations in the low single digits, the cloud has sold the promise of a breakthrough. For those organizations moving to Infrastructure as a Service (IaaS), utilization in the middle to high teens is possible, and for those moving to Platform as a Service (PaaS), utilization in the mid-twenties is within reach. That being said, where is the promised breakthrough? The actual breakthrough in the efficient use of IT infrastructure will not occur from cloud adoption alone, but through the application of machine learning to dynamically provision the right scale and type of resources at the time they are needed for consumption.

Dynamic provisioning driven by demand is essentially the same operational concept as power grids and municipal water systems – capacity allocation driven by where resources are consumed, rather than where they are produced. This is possible as a result of a near frictionless resource allocation and transport infrastructure. When a user expresses a demand for an IT service, the resources needed to provide that service will be dynamically provisioned from an available pool of capacity to fulfill the demand in real-time. When the resources are no longer needed, they are returned to the pool for provisioning elsewhere. Infrastructure capacity reserved/allocated and sitting idle will effectively disappear because it will only be allocated when needed.
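The allocate-on-demand, return-to-pool cycle described above can be sketched in a few lines. This is an illustrative model only (the pool class, unit granularity and service names are invented), not a cloud provider API:

```python
# Sketch: pool-based dynamic provisioning (illustrative, not a cloud API).
class CapacityPool:
    def __init__(self, total_units):
        self.free = total_units
        self.allocated = {}  # service -> units currently held

    def provision(self, service, units):
        """Allocate capacity from the shared pool when demand appears."""
        if units > self.free:
            raise RuntimeError("insufficient capacity")
        self.free -= units
        self.allocated[service] = self.allocated.get(service, 0) + units

    def release(self, service):
        """Return capacity to the pool once the demand is fulfilled."""
        self.free += self.allocated.pop(service, 0)

pool = CapacityPool(total_units=100)
pool.provision("reporting-job", 30)  # held only while the demand exists
pool.release("reporting-job")        # idle capacity goes back to the pool
```

The point of the model is the last line: capacity is never left reserved and idle, which is what makes the power-grid analogy work.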

The second part of the breakthrough relates to right-sizing infrastructure. Whether this is network capacity or compute Virtual Machine size – machine learning will enable analysis of the patterns of behavior by users and correlate them to the consumption of infrastructure resources. Eventually, the benefit of machine learning in this scenario will be in the predictive analysis of infrastructure needs to anticipate and deliver more efficient resource allocation.
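A right-sizing recommendation of this kind can be reduced to a simple rule: pick the smallest instance size that covers observed peak demand plus headroom. The size catalog, headroom factor and sample data below are all assumptions for illustration:

```python
# Sketch: right-sizing a VM from observed utilization.
# Size catalog, headroom factor and samples are hypothetical.
SIZES = {"small": 2, "medium": 4, "large": 8}  # vCPUs per size

def recommend_size(cpu_samples, headroom=1.3):
    """Smallest size covering peak observed demand plus headroom."""
    needed = max(cpu_samples) * headroom
    for name, vcpus in sorted(SIZES.items(), key=lambda kv: kv[1]):
        if vcpus >= needed:
            return name
    return "large"  # fall back to the largest available size

samples = [0.8, 1.1, 2.4, 1.9]  # observed vCPU demand over time
recommendation = recommend_size(samples)
```

Machine learning enters when the samples are replaced by predicted demand, learned from correlated user-behavior patterns, so the sizing decision can be made before the peak arrives rather than after.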

During the near term, these benefits will be much more tactical. Automated discovery combined with behavioral correlation analysis will virtually eliminate the need for manual inventory and mapping of components and configuration items in the IT ecosystem to reveal how the ecosystem is operating. There may still be a need for manual activity to articulate how the ecosystem was designed to function, but this can be done using a declarative approach that describes the desired behavior.

Today, IT has the opportunity to automate the mapping of components in their infrastructure to provide a more accurate and actionable picture.

Securing the Internet of Things

This blog post considers the security challenges of the Internet of Things. Security has always been a high-order concern in distributed systems, even those with only dozens of devices. Now consider the increased scope of this challenge when people routinely speak of billions of connected devices communicating across global networks of all sizes and kinds.

Securing this kind of infrastructure requires a different approach, since perfect security is simply not possible. Operations technologists (e.g. those running manufacturing plant floors or civil infrastructure, such as power grids) have relied on physical separation of their systems, but IoT benefits are too great to continue siloed operations.

As Maciej Kranz wrote in his book, Building the Internet of Things, there are several aspects to the required security model:

  • A risk-based approach
  • Defense in depth
  • Joint IT/OT cooperation

Risk-based. It may seem obvious, but too many security conversations are not sufficiently well grounded in risk. Risks must be understood and quantified across the enterprise, and ultimately, the ordinary enterprise cannot have more at risk than it is worth. Even the costs of large and notorious security breaches, such as the 2013 Target Point of Sale hack, can be calculated.

Within mature security capabilities, therefore, security is nothing more than a specialized form of risk control (the well-known CISSP security certification is based on this premise). Risks emerge and require a lifecycle of identification, analysis, prevention, mitigation and continuous review. If a specific risk is no longer material, then better to spend scarce resources fixing a higher priority one.

Defense in depth. “Crunchy on the outside, soft and chewy on the inside” is a hacker saying for organizations that focus all their resources on perimeter defense. The Stuxnet worm showed that physical separation doesn’t work; infected thumb drives carried across the “air gap” destroyed sophisticated nuclear centrifuges with elegant efficiency.

There are many ways to make a hacker's life difficult even after systems have been penetrated: network segmentation, data leakage protection, network traffic pattern recognition and more. One key element: distributed IoT devices "in the field" will increasingly be protected by video feeds. The cost of motion-activated cameras is decreasing, their bandwidth and power requirements are modest, and solar panels can even power some of them, so expect to see this as a backup capability more and more frequently.

Security must be “end-to-end” – every conversation and handoff must be managed over secured links, with trusted encryption and appropriate segmentation. Policy-based infrastructure managers (Chef, Puppet, Ansible, SaltStack, etc.) that continuously monitor and correct for drift from the desired state are essential. They should be monitored and well secured, as compromising them is to compromise the “keys to the castle.” Their reported patterns of “drift” may well indicate adversary action. Automated, intelligent analytics and predictive analysis are essential for scaling to IoT volumes.
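As a rough illustration of what those policy managers do (a toy sketch, not Chef or Puppet internals; the configuration names are invented), drift detection amounts to diffing observed state against desired state:

```python
# Toy sketch of the "drift" check that tools like Chef, Puppet or Ansible
# perform continuously. Resource names and settings are hypothetical.

DESIRED = {
    "sshd_config": {"PermitRootLogin": "no", "PasswordAuthentication": "no"},
    "ntp_servers": {"primary": "ntp1.example.com"},
}

def detect_drift(desired: dict, observed: dict) -> list:
    """Return (resource, key) pairs where observed state differs from policy."""
    drift = []
    for resource, settings in desired.items():
        actual = observed.get(resource, {})
        for key, value in settings.items():
            if actual.get(key) != value:
                drift.append((resource, key))
    return drift

observed = {
    "sshd_config": {"PermitRootLogin": "yes", "PasswordAuthentication": "no"},
    "ntp_servers": {"primary": "ntp1.example.com"},
}

# A sudden burst of drift like this may indicate adversary action rather
# than ordinary configuration decay, which is why drift reports themselves
# deserve monitoring.
print(detect_drift(DESIRED, observed))  # [('sshd_config', 'PermitRootLogin')]
```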

Joint IT/OT cooperation. This series previously examined the IT/OT (Information Technology and Operations Technology) relationship. To reiterate: neither can address security concerns alone, and the legacy of misunderstanding and mistrust between them must end. Operational facilities are increasingly connected; the growing consensus is that the benefits (e.g. reduced staff visits) can outweigh the risks. Without a comprehensive security architecture protecting IoT devices and communications, however, the risks are enormous. IT must apply state-of-the-art network security (authentication and access control, encryption and appropriate segmentation), and OT must contribute its domain expertise and the best thinking of its current suppliers. A clear, quantified and rational understanding of the risks involved must inform both.

Conclusion. Understanding and securing complex IoT infrastructures will always require the "ground truth" of knowing the devices, software and services. Rationalizing this essential data foundation is a complex problem, requiring state-of-the-art analytics. Blazent has the market-leading algorithms required for accurate inventories (which is core to identifying potential gaps or prospective breach points), as your information technology and operational technology scale into the new world of the Internet of Things.

Information Technology, Operational Technology, and the Internet of Things

This blog has previously mentioned the relationship between IT Asset Management (normally under the control of IT) and Fixed Asset Management (normally under the control of OT – Operational Technology). In equipment-intensive verticals, such as manufacturing or healthcare, OT is one of the largest categories of non-IT assets.

Examples of operational technology include plant floor control systems, hospital diagnostic and monitoring systems, transportation control systems, automated teller machines (ATMs), civil infrastructure (e.g. tollway automation and water management) and more. Traditionally, while these systems might be computer-based, their technology and communications were proprietary and specialized, and they would be physically isolated from corporate IT networks in the interest of security.

As Cisco’s Maciej Kranz wrote in his book, Building the Internet of Things, there has also been a cultural divide between Information Technology and Operational Technology. He states, “As the worlds of IT and OT begin to converge, a culture clash is usually close behind.” The idea of a weekend shutdown to update software might be unacceptable to operations groups, while IT organizations seeking standardization struggle with the proprietary nature of OT systems. There are many stories of poor relationships between IT and OT, which are independent of vertical or underlying technology.

The Internet of Things (IoT), and digital transformation more broadly, however, are challenging both sides to work together more closely. The IoT requires open, pervasive networking (with all the implied security challenges), and as it expands, networked connections increase, process touchpoints expand, data becomes more widely shared, and governance and control must be managed in a consistent and unified manner.

The scale of shared IT services required, such as network, compute, storage and analytics, is beyond the abilities of most OT organizations to deliver. If IT is to be a service provider, or even a service broker, then thoroughly understanding the needs of OT as a partner becomes increasingly critical.

Operational systems are an organization’s lifeblood, and IT has too often allowed itself to fall into the role of “order taker,” with a process-driven, back-office, bureaucratic mindset and little sense of urgency; meanwhile, downtime in a manufacturing plant can cost tens of thousands of dollars a minute. Conversely, reliance on physically separated operational systems means that traditional OT groups haven’t had to build robust network security capabilities; they are normally focused on controlling physical access. Organizational change management and attention to culture will be essential to navigating such differences. In addition, new organizational forms may be required to blend the best of both worlds.

Asset management in this new world becomes more complex: IT asset management systems and fixed asset systems may need to interface further with operational asset systems (IBM’s Maximo being a well-known example of an operational asset system, as opposed to an IT asset management system). When does a server go into which system? This blog has previously examined this problem in detail for just the IT asset and fixed asset case (as well as for CMDB and monitoring tools), but adding yet another class of asset systems makes the matter even more complicated.
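To make the reconciliation question concrete, here is a hypothetical sketch (all serial numbers, fields and systems invented for illustration) of matching one pool of serial numbers across an IT asset system, a fixed-asset register and an operational asset system:

```python
# Hypothetical sketch: matching serial numbers across an IT asset system,
# a fixed-asset register and an operational asset system such as Maximo.
# All serials, fields and values below are invented.

itam = {"SN123": {"owner": "IT", "model": "rack server"},
        "SN555": {"owner": "IT", "model": "edge gateway"}}
fixed_assets = {"SN123": {"book_value": 4200},
                "SN999": {"book_value": 150_000}}
operational = {"SN999": {"site": "plant-3", "type": "industrial robot"}}

def reconcile(*systems: dict) -> dict:
    """Map each serial number to the indexes of the systems that know it."""
    coverage: dict = {}
    for i, system in enumerate(systems):
        for serial in system:
            coverage.setdefault(serial, []).append(i)
    return coverage

coverage = reconcile(itam, fixed_assets, operational)
# Serials known to only one system are the gaps reconciliation must close.
orphans = [sn for sn, seen in coverage.items() if len(seen) == 1]
print(orphans)  # ['SN555']
```

Real-world matching is far harder than this (conflicting serials, partial records, different keys per system), which is where analytics comes in.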

Cisco has proposed the idea of “fog computing,” where distributed IoT devices will increase in power and storage. Given the cost and difficulty of moving large quantities of data, moving more processing power to the edge seems a likely scenario, but what does that mean concretely? It means more assets in more wiring closets and remote data centers – more to track, secure and maintain.

Ultimately, the asset reconciliation problem becomes so complex that it requires analytics. Questions such as, “Who owns that?” and “Who’s responsible?” are challenging enough, and even more so when you are uncertain of your data. Continuous consolidation and rationalization of these devices will be required for secure and efficient operations. Blazent has unique, industry-leading algorithms for solving these hard problems that are fast approaching with the Internet of Things.

While IT may be the master of asset management, evolving this function to the needs of IoT endpoints means there will be new asset classes and data-streams to aggregate.

The Two Faces of the Internet of Things in the Data Center

The ascension of the Internet of Things (IoT) will have two profound effects on data center operations. First, IoT techniques will continue to increase their utility for core data center operational processes, and second, broader use of IoT-enabled infrastructure will challenge data center capacity and capabilities.

Servers and their related infrastructure have been at the forefront of IoT management for a considerable period of time. In fact, a good way to understand IoT potential is to look at the capabilities of a modern managed server. These capabilities extend well beyond just a CPU, memory and input/output ports. IoT-based management components in a modern computer include items such as:

  • Temperature sensors
  • Fan speed sensors
  • Security sensors (e.g. thumbprint scanners)
  • Power sensors
  • Moisture sensors

These components also include a broad range of other sources of telemetry data. IT monitoring systems collect and aggregate this data, and when readings move out of acceptable ranges, the system automatically issues support tickets to trigger a service call. IoT techniques continue to evolve in the data center: server racks became available as “managed” devices years ago, and power distribution units and uninterruptible power supplies now communicate over the network to raise alerts when power is interrupted or battery health degrades.
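The collect-evaluate-ticket loop can be sketched in a few lines; the thresholds and ticket shape here are assumptions for illustration, not any particular monitoring product:

```python
# Sketch of threshold-based ticketing from server telemetry. Thresholds,
# host names and the ticket record shape are invented for illustration.

THRESHOLDS = {"max_temp_c": 75.0, "min_fan_rpm": 1000.0}

def evaluate(reading: dict) -> list:
    """Turn out-of-range sensor readings into support-ticket records."""
    tickets = []
    if reading["temp_c"] > THRESHOLDS["max_temp_c"]:
        tickets.append({"host": reading["host"],
                        "issue": "over-temperature",
                        "value": reading["temp_c"]})
    if reading["fan_rpm"] < THRESHOLDS["min_fan_rpm"]:
        tickets.append({"host": reading["host"],
                        "issue": "fan-degraded",
                        "value": reading["fan_rpm"]})
    return tickets

# A hot server with a failing fan raises two tickets in one pass:
print(evaluate({"host": "db01", "temp_c": 82.5, "fan_rpm": 640}))
```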

With the increasing heat load of modern data centers, monitoring and managing ambient temperature becomes more and more critical. Temperature and humidity sensors are deployed throughout the facility and used to enable a “smart HVAC” response that can target cooling to specific locations.

Moisture sensors trigger alerts in the case of leaks or increasing humidity, and again can trigger either an automated HVAC response or direct human intervention, while motion sensors routinely feed security systems. Smoke and heat detectors are, of course, required by regulation and are integral parts of any data center’s IoT infrastructure.

The overall power efficiency of a facility can be aggregated from the electrical consumption of both the direct computing infrastructure as well as the mechanical systems, which is then compared to industry baselines and used to drive continual improvement. Managed services providers, in particular, stand to benefit from the ongoing application of IoT management in their facilities, and its support for efficiency and economies of scale.
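That comparison to industry baselines is usually expressed as Power Usage Effectiveness (PUE): total facility power divided by the power delivered to the IT equipment itself, with 1.0 as the theoretical ideal. A minimal sketch with hypothetical figures:

```python
# Power Usage Effectiveness (PUE), the standard data center efficiency
# metric. The sample kilowatt figures below are hypothetical.

def pue(it_power_kw: float, mechanical_power_kw: float,
        other_power_kw: float = 0.0) -> float:
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    total = it_power_kw + mechanical_power_kw + other_power_kw
    return total / it_power_kw

# 800 kW of IT load, 360 kW of cooling, 40 kW of lighting and losses:
print(pue(800, 360, 40))  # 1.5
```

Tracking this ratio over time, per facility, is one simple way IoT telemetry drives the continual improvement mentioned above.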

In many ways, the IoT is a massive expansion of these kinds of practices. The whole world becomes the data center, with sensors embedded in everyday items and industrial infrastructure. Even in the data center, IoT-based management consumes capacity: monitoring heat, humidity, moisture, smoke, power and so forth requires network bandwidth, processing power and storage. When the whole world is being managed this way, IT consumption increases massively. With sensors deployed everywhere, from utility grids to medical devices to supply chains, what might be a few million data points in the modern data center quickly becomes trillions.

With these conditions, network bandwidth, compute power and storage requirements all increase exponentially. Switching and routing infrastructures will require upgrades, especially closer to the network edges where the majority of the IoT lives. Storage systems will be subject to similar loads, requiring petabytes of storage; and because data formats are so variable and logging is so write-intensive, structured relational databases are avoided in favor of NoSQL solutions better suited to log aggregation.
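One reason document-oriented stores fit this workload: each telemetry record carries only the fields its sensor produces, with no schema migration when a new field appears. A small sketch (device names and fields invented):

```python
import json

# Sketch of why write-heavy, variably shaped IoT telemetry suits a
# document/log model: each record is self-describing, so devices with
# different sensors need no shared relational schema. Records are invented.

readings = [
    {"device": "rack-14-pdu", "watts": 3120},
    {"device": "chiller-2", "supply_c": 7.1, "return_c": 12.4},
    {"device": "door-9", "motion": True, "camera_clip": "clip-0042"},
]

# Append-only, one JSON document per line: the shape many NoSQL log
# aggregation stores ingest directly.
log = "\n".join(json.dumps(r, sort_keys=True) for r in readings)
restored = [json.loads(line) for line in log.splitlines()]
print(restored == readings)  # True: nothing lost despite no fixed schema
```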

IoT data management practices will need a rigorous design for aggregating, archiving and purging; otherwise the costs of storage will exceed the IoT economic benefits. Expect regulators and lawyers to scrutinize IoT data from records management and litigation perspectives, while, as always, privacy advocates are already raising concerns.

Moving and storing data is pointless unless it is applied to achieving business value. This implies new and evolving analytics algorithms, and the platforms capable of running them. Such analytics may be run in batch across already stored data, or in more advanced forms would be based on complex event processing, where high volume data streams are analyzed in real time.

These kinds of applications are still cutting-edge and the subject of advanced research, including related topics, such as machine learning and neural networks. All such algorithms must be based on clear economic value, such as predicting and preventing outages, improving customer outcomes (boosting Net Promoter Scores) or increasing supply chain efficiency.

As endpoint counts spike inside and outside the data center, understanding their origins and rationalizing them to known inventories become ever more important. Blazent’s powerful analytic engine is ideally suited to manage and drive value from the Internet of Things’ accelerating growth; this is an area in which we invest continuously, and is an area to which our customers and prospects need to pay very close attention.

Uncovering the Internet of Things

The next few posts in this blog will focus on the Internet of Things. Often abbreviated as IoT, the Internet of Things is a term for the massively interconnected webs of sensors and actuators increasingly embedded in almost every personal, commercial and industrial device, appliance, tool and sub-component. These sensors can be used in applications as diverse as:

  • Detecting fractures in railcar wheels
  • Monitoring and reporting the temperature in poultry barns
  • Observing patients’ vital signs
  • Recording and transmitting security camera feeds

While remote monitoring – a.k.a. telemetry – is not new, the Internet of Things is distinguished by scale and pervasiveness, and the use of Big Data techniques to analyze the resulting flood of information. IoT promises a more dynamic and adaptive digital ecosystem, as everyday devices become accessible, and therefore “smart.”

The potential upside of an Internet of “Things” is vast; it is likely to provide an inflection point that changes the direction of human culture, much as the Internet, mobility and social media did before it. IoT is equally disruptive, but on a much bigger scale.

Companies are looking to IoT to improve the customer experience, boost insight into operations and supply chains, enhance security and control of key assets, provide warnings and alerts of infrastructure malfunction or other risk events, and much more. IoT also promises to blur further the lines between traditional “IT” concerns and broader operational issues – what’s been called the convergence of “IT” (information technology) and “OT” (operational technology).

For example, a server or a router in a data center is usually understood to be an “IT” concern. How does that relate, however, to a massive bulldozer, or even just a railroad engine wheel, that is now an Internet end-point? A truck, one of many in a fleet? Who manages the information associated with a vast range of what had previously been “dumb” devices? Responsibility for the physical node (e.g. a railcar wheel) will probably remain with its associated operational group, but it will be more and more dependent on IT services for base networking and very possibly data aggregation and analytics platforms associated with the specific end-point.

Service dispatch and response may be managed through IT-based ticketing systems, as these often have the highest maturity in a given organization. All of this is already happening; for example, a major refinery operator automatically opens a help desk ticket when a sensor on one of its pipelines indicates an unexpected drop in pressure. Not your traditional IT service request, but it works, and works well. Because of this type of expanded monitoring and reporting capability, the traditional boundaries between functional silos are already starting to disappear.

While IoT applications will increase the need for networking infrastructure and services, the voluminous nature of the data will also challenge business processes, which must accept increased uncertainty and error. When an “error” in an IoT feed might indicate a security risk, the best approach will be challenging to define. High quality and well-engineered data quality analytics, such as Blazent provides, will be an essential part of the new operating model.

The data center itself will continue to increase its automation and sensing: racks, power distribution units and environmental controls will all continue to gain instrumentation. Automatic location of physical assets has long been sought in large data centers, and technology is now making it possible. Reconciling such data back to the supply chain processes by which assets are acquired will also continue to mature.

Software licensing may also be affected. If an IoT solution involves enterprise software providers, there may be significant ramifications. Previous posts have mentioned that virtualization has resulted in unexpected costs when licensing is poorly understood. Will the likes of SAP or Oracle start to charge for their core, mission-critical software on the basis of how many IoT nodes a company is managing? Given their previous history, it seems prudent to assume that these companies will seek to maximize any such opportunity for revenues. If your enterprise could be targeted for this revenue increase (and chances are it is), best to be prepared, especially knowing that unmanaged IoT nodes are also a security risk.

The convergence of IT and OT will further complicate a topic presented previously in this blog: the relationship between IT asset management and corporate fixed asset management. Systems, data and process will all be affected. A common IoT use case might appear similar to well-known ITSM practices, including:

  • Connecting to a new asset (e.g. an industrial robot)
  • Defining the data flow from it, including establishing baselines for “normal”
  • Predicting faults and failures
  • Dispatching response
  • Enabling incident resolution and problem root cause
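
The “baseline for normal” and fault-prediction steps above can be sketched with a simple k-sigma test; real systems use far richer models, and the vibration figures here are invented:

```python
import statistics

# Hypothetical sketch: establish "normal" from historical readings, then
# flag departures that may precede a fault. Data is invented; production
# systems use much richer statistical and machine-learning models.

def baseline(history: list) -> tuple:
    """Mean and standard deviation of past readings."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, mean: float, stdev: float,
                 k: float = 3.0) -> bool:
    """Simple k-sigma test against the established baseline."""
    return abs(value - mean) > k * stdev

# Historical vibration readings (mm/s) from a hypothetical robot arm:
vibration_mm_s = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1]
mean, stdev = baseline(vibration_mm_s)

print(is_anomalous(2.1, mean, stdev))  # False: within the normal band
print(is_anomalous(4.8, mean, stdev))  # True: possible pre-failure signal
```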

Improving fault or incident management is a key goal of IoT initiatives. The lowest maturity is to respond and restore service. Higher maturity prevents specific outages, and the highest maturity predicts scenarios that might lead to outages.

Future posts in this blog series will cover:

  • Instrumentation and IoT in the data center
  • IoT for improved asset and configuration management
  • Software licensing and the Internet of Things
  • The convergence of information technology with operational technology
  • Securing the Internet of Things