Running towards the ITOM challenge

Continuous adaptability is what separates market leaders from cautionary footnotes. A rapidly accelerating confluence of deep, pervasive currents is reshaping the ITOM ecosystem. Unless companies are willing to run towards these challenges head-on, they will never harness the waves of change on which they ride.

“It is not the strongest of the species that survives, nor the most intelligent. It is the one that is most adaptable to change.” —inspiration from Charles Darwin.

Major vendors such as HPE and ServiceNow recognize both the challenge and the opportunity being presented, and are aggressively moving towards providing a more comprehensive and integrated end-to-end ITOM solution set, with the strategic intent of covering the entire range of operations and business methodology integration. ITOM is composed of disparate process elements that pivot off complex technology dependencies, all of which need to work seamlessly.

Having spent my entire career in this space, I’ve never seen technology challenges this complicated (which is what makes it fun and interesting). The high-level ITOM challenge breaks down into three primary areas:

Development: Planning for new features or services delivered by ITOM needs to be triggered by an iterative feedback loop built into the process workflow. As more entities (“things”) enter the IT system, the ability to track and optimize new features while managing them at scale becomes critical. This is where technology capabilities like Feature Flagging begin to show true value-add at the DevOps end of the process, enabling more rapid iteration. However, the challenge here is not only technical but also cultural: since Sales and BizDev functions rarely interact with Development teams, a strong Product Marketing/Product Management function becomes an essential bridge across organizational divides.

Deployment: Software in development is very different from software being deployed. Once the software is out in the real world, mistakes can affect not only perception but the bottom line as well. This is another instance where Feature Flagging can lower the risk of change by enabling the deployment team to test new features in a tightly controlled setting and quietly roll them back if there are any surprises.
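The mechanics behind this are straightforward. Here is a minimal sketch of a percentage-based feature flag; the flag name, in-memory store, and rollout rule are illustrative assumptions, not any specific vendor’s API:

```python
import hashlib

# Illustrative in-memory flag store; a real system would back this
# with a flag service or configuration database.
FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the flag is on for this user.

    Hashing the flag name plus user id gives a stable bucket from 0-99,
    so a given user stays in (or out of) the rollout across sessions.
    """
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(
        f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

# Rolling back a misbehaving feature is a config change, not a redeploy:
# FLAGS["new_checkout_flow"]["enabled"] = False
```

The point of the pattern is the last comment: turning a feature off for everyone (or widening the rollout) never requires shipping new code.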

Most importantly, this needs to be an integrated part of the entire end-to-end effort; this type of capability cannot operate in a siloed fashion. Sounds complicated? It is, and much more so than you think when you factor in that most deployments are delivered in a hybrid environment. Cloud? On-premise? Virtual? Most deployments are likely to include all three, and tracking performance across very different environments is likely to keep ITOM folks on their toes for a long time.

Operational optimization: Technology implementations out in the real world (that is, operational or production systems) aren’t static; they change. They are subject to stress and unplanned contingencies that are not normally part of the development environment or a controlled release process. This is where adaptability becomes essential for success. Any data that can be used to optimize performance and/or identify unexpected dependencies (which are even more likely to occur in an IoT-centric ecosystem) needs a tightly coupled feedback loop into subsequent iterations, particularly with development teams moving towards a continuous release methodology.

Consider for a moment the dynamic end-to-end integrations that occur as companies embrace hybrid systems and open the floodgates to allow IoT devices into their technology ecosystem. It is the adaptability of culture, processes and technology that will determine whether you are destined for success or at risk of being overwhelmed. Are you someone who runs towards a challenge when others freeze in fear and run away? If so, then you will find ITOM an exciting and rewarding place to be these days.


The cold/warm embrace of IoT

What drives adoption of IoT at the consumer level? Presumably convenience (easy to set up and use), enablement (delivers promised value), and (depending on the product) cost.

At the top level, there are going to be two types of “things”: those we see (or interact with directly), and those we don’t. The ones we see (e.g. on-person devices, home digital assistants, etc.) will actually require some work on the part of the end user before they deliver promised value (similar to setting up a wi-fi network), which means the manufacturer or developer has to deliver a no-brainer experience.

However, ease of set-up is not enough; companies offering IoT also have to focus on the ongoing experience, particularly when things start to wobble (which they will). Any IoT device is essentially some form of hardware/software widget connected to a network, which means that nearly any error on the network, the OS, the app, or the device hardware could tip it over. The consumer will blame what they can see (as usual), and since the IoT ecosystem is inherently complex, it’s going to be very tough to pinpoint where the problem lies. The manufacturers will say it’s someone else’s fault (as they always have), and the consumer will eventually get frustrated and move to a different variant of the product. The burnout phase is long enough that consumers are likely to buy into ongoing marketing hype (e.g. “check out the latest and greatest drone ever!”), which means a steady stream of new customers, and therefore less incentive for the ecosystem suppliers to deliver a seamless customer experience for those who’ve already drunk the kool-aid in the visible portion of the IoT spectrum.

The IoT devices we don’t see are actually faring much better; these are embedded devices that are already prevalent in industrial applications and have been steadily moving into the consumer space, but (importantly) do not require the consumer to actually do anything. A very cool (if somewhat over the top) example is the new navigation system on a Rolls-Royce. The IoT devices on board get road-condition information continuously from a satellite feed and automatically adjust the transmission to the specifics of what is ahead, and this is not just adjusting for slippery driving conditions; it’s something as basic as downshifting for an upcoming curve in the road. This is something that is always on, and always there. This is a great example of the enabling swarm of IoT devices that are designed to make our lives safer and easier (I know the example applies to a car that costs over $400,000, but the technology will work its way down, and probably pretty quickly).

So, bottom line: if you can see it, expect complications and challenges. Whatever the device is, it will work most, but not all, of the time, because there is a human (you) in the loop. On the other hand, the best IoT device is the one you don’t know is there; it just works, and works all the time, making your life just a little bit easier. Embrace the first category (even though it may occasionally be tedious) while the second category embraces you, whether you know it or not.

IoT and Security – Same as always, only more so.

Chuck Martin over at MediaPost recently wrote an article with some interesting insights on consumer perspectives on IoT. While there is a boxful of statistics, the ones that jumped out were:

  • 14% of consumers surveyed think they are knowledgeable about IoT security
  • 90% think that security is something that should come standard with whatever they purchase

So here is the issue. It’s easy to have top of mind awareness about security when you’re dealing with a device that is in your face (literally) all day. The primary operating context that most people think of for IoT is their phone (the average adult checks their phone over 200 times per day). It’s easy to recognize the risk associated with a compromised device that’s used hundreds of times per day, particularly when the device requires focus on the part of the consumer.

IoT is different. The whole point of IoT is that you don’t think about it. When was the last time you had a meaningful exchange with your toaster? Or the fuel injection system in your car? Technically your phone is a “thing”, but it’s a communications thing, an information thing, a thing that makes you feel naked if you don’t have it. So the other variable to consider is what is the class or level of thing that surveyed consumers are thinking about?

To paraphrase George Orwell, “all things are created equal, but some are more equal than others”. The more equal things are the things we use transactionally: phones, smart watches, smart glasses (if those ever manage to take off); the common thread is that these are things on our person. The less equal things, which far outnumber the more equal things, are the myriad sensors and RFID tags that are embedded in everything and now number in the multiple billions. Those are the things that form an invisible swarm in our daily lives, and the more of them there are, the more we’re going to come to depend on them. The more we depend on them, the juicier the target becomes for some nefarious jerk to try and mess with them. It doesn’t even need to be something as overt as taking over a phone. It could be turning every stoplight in a city green at the same time, or causing a pipeline at a refinery to blow a gasket and trigger a “shelter in place” scenario, or shutting off smoke detectors in all public schools; the list is endless, and ranges from inconvenient to deadly.

And just to make it a bit more complicated, what happens when “things” stop working (assuming you even notice)? Right now if, for example, your PC (also a thing) goes down, is it the PC, the OS, the app, your router, your WiFi, or your service provider? You can rest assured that when you ask who’s at fault, they will all point their fingers at anything but themselves, and this is an obvious example. What happens when a non-obvious thing goes offline or is compromised?

14% of consumers are knowledgeable about IoT security? I seriously doubt that. I have yet to meet a genuine expert in IoT security (just like I have yet to meet an actual “genius” at an Apple store), and I’ve been in this space since its inception.

There is a whole new level of cross-domain capability needed to address security for this type of technology ecosystem. The operating framework is so broad and complicated that this is going to fall into the purview of major technology companies, who will then use it to push their own solution as a standard. Sound familiar?


IoT – Am I pleased or frightened?

IoT has become a very noisy space recently, driven by the fact that a significant part of the current wave of interest comes from consumer applications (e.g. wearables, smart homes, etc.). If you want the masses excited, you have to be the noisiest kid on the block, because everyone else trying to get their attention is doing the same thing.

While there is no doubt that consumer applications are pushing the B2C component to the front lines, the reality is that IoT has already been in play in the industrial world for quite a while.

One of the main differences between industrial IoT and consumer IoT is predictability. If a refinery has sensors set up on their pipelines to report flow and pressure, those numbers operate in a consistent range, and even when they don’t (e.g. a significant pressure drop), outliers are accounted for in the analytic app that processes incoming data. The main focus of integrating IoT into a production environment is optimizing workflow to whatever is driving business decisions (speed, volume, external deadlines, etc.). It is all predictable and consistent, which makes the management of the process complicated, but not difficult.
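To make “outliers are accounted for in the analytic app” concrete, here is a minimal sketch of how a telemetry feed might flag readings outside the expected operating band; the pressure values and thresholds are invented for illustration:

```python
from statistics import mean, stdev

def flag_outliers(readings, n_sigma=3.0):
    """Flag readings more than n_sigma standard deviations from the mean.

    Industrial telemetry is predictable enough that a simple band like
    this catches events such as a sudden pressure drop.
    """
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mu) > n_sigma * sigma]

# Hypothetical pipeline pressure readings in PSI; the 40.0 is the
# "significant pressure drop" an operator would want surfaced.
pressures = [101.2, 100.8, 101.0, 100.9, 40.0, 101.1, 100.7]
print(flag_outliers(pressures, n_sigma=2.0))  # → [(4, 40.0)]
```

Production systems would use more robust methods (rolling windows, median-based bands), but the principle is the same: a known operating range makes anomalies cheap to detect.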

When this model is applied to a B2C environment, the variability of input goes off the scale almost immediately. People may be creatures of habit, but when you have billions of little devices tracking every possible behavioral permutation for millions of consumers, the range of actionable scenarios goes up exponentially, and this assumes you have the technology to actually see what’s going on.

The real concern here is not necessarily the analytics (complicated, not difficult), but the security associated with how these devices all work (complicated and difficult). It’s apparently easy to hack into a server, or a PC, and now even a phone (and notice that the devices being hacked keep getting smaller). The next step is hacking smart devices, which is to say, anything with an IP address. Very soon manufacturers are going to start embedding chips in everything. Why? Because the cost curve is dropping sharply, and the value of the data they can extract will go up very quickly (knowing how a product is used lets the manufacturer adjust both future development and positioning for marketing purposes).

Because of all this, there is a strong built-in incentive to add this technology everywhere, which means that pretty much everything in our lives will become hackable. Having a toaster report usage back to the manufacturer is useful for them, but brings no direct benefit to the consumer who bought it. What it does do, however, is allow external access to skilled douche-bags who can hack in and cause the toaster to overheat and catch fire. Multiply that by an installed base of millions and suddenly things get very ugly.

Should you be excited by the possibilities inherent in IoT? Absolutely. We haven’t even begun to scratch the surface of this technology, and it’s already having a massive positive effect on our lives.

Should you be concerned about the possibilities inherent in IoT? Hell yes. No matter how clever engineers are in creating cool technology, there is someone equally clever at exploiting it for nefarious purposes.

What should we do? Embrace it. Expect the occasional rough patch, just as with any other ubiquitous technology; essentially, keep calm and carry on.

IoT and the cheerfully oblivious customer

Functions within companies that service end-user needs have historically relied on their customers’ transactional data to determine interests, habits and use of existing products and services. The analysis is often pretty straight-forward (the customer bought tires 4 years ago; at 12,000 miles per year, they’re ready for a new set), or relies on techniques like collaborative filtering: if you suddenly and consistently start buying products for infants, the system compares your profile to others who’ve previously done the same thing, and recommends whatever other members of your cohort have purchased.
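The collaborative-filtering idea can be sketched in a few lines; the shoppers and purchase histories below are invented for illustration, and real recommenders use far richer similarity measures than raw overlap:

```python
# Minimal user-based collaborative filtering: recommend what the most
# similar other shopper bought that this shopper hasn't.
purchases = {
    "alice": {"diapers", "formula", "stroller"},
    "bob":   {"diapers", "formula", "baby monitor"},
    "carol": {"tires", "motor oil"},
}

def recommend(user, purchases):
    """Find the other user with the largest purchase overlap and
    suggest their items that `user` doesn't already own."""
    mine = purchases[user]
    best, best_overlap = None, 0
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)
        if overlap > best_overlap:
            best, best_overlap = other, overlap
    return (purchases[best] - mine) if best else set()

print(recommend("alice", purchases))  # → {'baby monitor'}
```

Alice and Bob share two infant products, so Bob’s remaining purchase is the recommendation; Carol shares nothing with either, so she gets no suggestion from this cohort.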

All of this is fine, and it works reasonably well, but it is all based on a data set driven specifically by the end-user and their actions. Where it’s going to get much more interesting is when the network that surrounds that user begins to provide ancillary data that delivers a much more subtle context for what is going on, and more importantly, what is likely to happen next. Predictive analytics has been around for quite a while, but like most analytic models, it is primarily based on end-user behavior projected forward, rather than factoring in input from the overall environment in which that user is operating.

Now that our environment has evolved to become a transaction enabler, the applied examples of IoT become potentially endless. While IoT is likely to touch everyone everywhere, the deployment of the service depends on the implementation framework, for example:

Location framework: Anyone carrying a mobile device is already geo-location enabled, and the same applies to most cars built in the past few years. So how can IoT drive a stronger customer experience in this context? BMW is deploying technology that is triggered when your car hits a wet spot on the road. Your car starts to hydroplane, the ABS system kicks in, skid avoided. But then your car sends a signal to all other BMWs in the area saying “this spot on this road is wet, engage ABS automatically as you approach.” A car that is one step ahead of you and focused on your safety is a strong competitive differentiator, and one that delivers a very high value customer experience, even when the customer has no idea it’s happening.

Temporal framework: Most food has a limited shelf life. The easiest example is milk; two weeks after you’ve bought it, best not to open the carton at all. Right now the only way to tell if milk is spoiled is the sniff test, which I’m pretty sure most people would prefer to avoid. RFID sensors can be embedded in the carton, set to trigger a signal once the expiration date passes, which feeds a grocery-list app that builds out as your shopping date approaches. No more rock/paper/scissors with your spouse to see who sniffs the milk carton. The same premise can be applied to medications as they approach their expiration date, or essentially anything you put in your body.

Network framework: Disaster preparation and response has always been a major challenge. Even with a disaster that comes with a lengthy heads-up (e.g. a hurricane), people are still caught flat-footed, government agencies struggle to cope, and life sucks a lot more than it should. A big part of the problem is knowing what is where when, and who needs what how quickly. Embedding sensors in all supplies (down to individual items) and tying them to a visualization app can let relief agencies track exactly where critical supplies are (rather than shipping truckloads of unlabeled boxes, which need to be opened and catalogued on site, usually under sub-optimal conditions).

As mentioned in earlier posts, IoT technology is an incremental capability layer on top of supporting technology like internet and wireless. Because this capability has been around for so long and is so ubiquitous, people are rarely off the grid. Your day to day life is a series of network-enabled transactions; the average person checks their phone over 250 times per day (while for the average teenager, the number of times they check is beyond the scope of current mathematics). Knowing people are creatures of habit, and knowing these creatures are on the grid, provides an environment where everything can be tracked to deliver a far more holistic and engaging experience than we’ve experienced to date. This is particularly the case when things can now talk to each other on our behalf while we obliviously go about our daily grind.

IoT is by far the most pervasive change we’ve seen in the technology space since the introduction of the Internet itself. While this “always on, always watching” scenario can be unnerving for some people, most IoT data is targeted towards the end-user’s best interest, since the company delivering it is looking to expand customer engagement by adding value to the daily routine. This is good for the provider, good for the end user, and will continue to expand exponentially as more and more devices interconnect.


How infrastructure convergence drives IoT

For those of us who have spent our careers in the technology space, a long-term perspective helps in recognizing cyclical shifts in the introduction of disruptive technologies. In the early days it was ARPANET, which led to the World Wide Web (90s), then mobility (late 90s/early 2000s), then social media (2004) and now IoT, which in sheer numbers dwarfs anything that preceded it.

All of these technologies are incremental and dependent (e.g. mobility without the internet is just a phone, social media without a mobile device keeps us in our room, etc.). Every time a new capability is layered on, the potential for innovative solutions expands exponentially (look at the vast cottage industry of mobile apps that depend on mobility layered on top of internet infrastructure). As has happened several times over the last few decades, we’re experiencing another seismic shift, and this one has a fundamental difference.

IoT (as its name implies) is about things. There may be billions of people, but there are literally trillions of things. Why does this matter? Think about the socialization of things, or what is also referred to as the network effect. The more people there are on Facebook, the more useful and compelling it becomes (a bunch of kids in their dorm rooms chatting on-line vs. one billion people on the app at the same time is a completely different technology experience).

Now instead of thinking of a billion people on an app at the same time (which is pretty cool), think of a trillion + devices communicating with each other in a common framework across the globe. What is needed to have something like this come to pass?

IoT has device dependencies. You can enable an IP address on a device, but how that device uses this capability depends on its original purpose; is it a fitness device on your wrist, or a sensor node on an oil pipeline? Both report data, but in a completely different process context. The breakdown looks like this:

Process – All business depends on process; whether you’re tracking patient data, refining oil, or building a car, information flows from one point to another, with each step dependent on the data in the preceding step. In many cases IoT is a series of connected things with a common end-point objective. By adding an accessible IP component to an already established process, the level of visibility (and hopefully insight) that can be gleaned will have a massive effect on companies’ ability to fine-tune their processes in real time.

Context – The context of data defines its use. While all data is potentially useful, some data is clearly more useful than others, and that is defined by context. Who is the end user, and how important is the data to helping them achieve strategic objectives? Operational data that improves production and profitability is a superset of data that, e.g., tells a line worker to adjust a valve. Same basic data, different context. Tying process to the end user is what defines context.

Tracking and reporting – The good news is we now have access to vast amounts of data. That’s also the bad news: the signal-to-noise ratio is about to go off the charts (in favor of noise). New analytics systems that are adept at pattern recognition on a fundamentally bigger scale are going to be critical to any long-term success in harnessing what IoT can deliver. The flip side of this is: what do you do with the data once you have it? Who is the end user, and what do they need to be able to accomplish? Data visualization has always been a compelling way to make the complex understandable. While technical people may intuitively understand complex data sets, the business folks who can write big checks are the ones who ultimately need to feel comfortable with what they see, and being able to track the right data, and report on it in a way that makes sense, is critical to ensuring the successful use of IoT.

Quality management – Data that does not factor in the quality associated with it is useless. IoT is not only extending the sources of data entering any system, it is also introducing ample opportunities for redundancy, overlap, errors, etc. Because of the compounding nature of the network effect, any inconsistencies that are the result of data quality issues are going to be leveraged into a much bigger mess. Check your local news feed any day for a great example of this (Equifax, Target, Yahoo, the list is sadly endless). If companies can screw up on this level with their existing data sources, imagine what they can do when their sources are exponentially bigger.

Enabling infrastructure – Any disruptive technology tends to enter the market organically, since multiple vendors with multiple standards are vying to be the one ring to rule them all. The challenge with IoT is that the coverage model is so broad that it’s going to force systems that were not designed to work together from an information perspective to begin to collaborate. Want a simple example? BMC and ServiceNow (both massive IT infrastructure providers) have been at each other’s throats for years. Some companies are BMC shops, others like ServiceNow, and many actually use both, but for complementary functions. As the range of data sources delivered by IoT enters the IT ecosystem, it’s going to create a forcing function across the industry. The question is, can big companies, or the channels that enable them (which are also big companies), force everyone to just get along?

I have zero doubt this is going to work. The industry has been consistently capable of adopting wildly disruptive technology and eventually rolling it into something that works for the end user. It will require companies to adapt (to paraphrase Darwin, it’s not the strongest or smartest that survive, it’s the most adaptable). Some companies that rode the last wave will disappear, some will get gobbled up, and some will adapt and drive the system forward for years, creating a new crop of billionaires in the process.

Integrated Data is the Key for State Agencies to Become More Customer-centric

During the past few years, there has been considerable discussion and media attention about the impact of digital natives (Millennials and Generation Z) in business and culture – specifically the younger generations’ expectations of how technology enables human interactions. These shifts in expectations aren’t isolated to private-sector business; state and local government agencies are being similarly challenged to think differently about technology and how it is used to modernize the relationships between citizens and their governments.

For a long period of time, state agencies have been built on a foundation of bureaucracy, process and structure, imposing governmental culture and value systems on the citizens and organizations that interact with them. The impact of this is not only in the inherent inefficiencies that have been created, but also in the steadily increasing governmental costs associated with providing service. Fortunately, the environment is changing. Government agencies are increasingly looking to private industry as an example of modern customer-centric interactions and the internal capabilities needed to enable them.

While these are fundamentally people-and-process interactions, they cannot take place in the modern environment without technology and data. As a result, it is no surprise that state IT functions find themselves at the center of their organizations’ transformation efforts. Government CIOs, in turn, are adopting the operating models, strategies, processes and technologies of their business counterparts to address their organizations’ needs.

State IT organizations have been some of the strongest proponents of IT service management, enterprise architecture and data governance standards. While it may appear that these approaches perpetuate the bureaucratic mindset, in reality, they establish a framework where the lines between government/private industry can be blurred, and citizens can benefit from the strengths of government organizations in new and innovative ways.

The modern state IT organization is:

Transparent – Service-centric approaches to state IT are enabling agencies to leverage an ecosystem of public and private partners to support their organizational mission. Simultaneously, data and processes are integrating for transparency, while harvesting insights that support continuous improvement.

Responsive – With ITSM process improvements, state IT organizations are not only capable of being more responsive to the needs of citizens and staff, but also to changes in the technology, legislative and business environments. Structured operations and high-quality data make it easier to identify and address changes and issues.

Connected – Enterprise Architecture is enabling agencies to maintain a trustworthy core of information needed to support decision-making and integrate that core with cutting-edge technology capabilities, such as IoT, geospatial data, operational technology and mobile devices to enable connected insights on-par with private-industry initiatives.

State processes have always been data-centric – collecting, processing and analyzing information to support the agency’s charter. Recently, however, the interpretation of this charter has changed to include a stronger focus on the efficient use of resources and the effectiveness of the organization in making a positive impact on its served community. While standards provide a framework for transparency, responsiveness and connectivity, achieving success relies strongly on implementation. How IT systems are implemented, both internally to the organization and in conjunction with the broader ecosystem of public and private partner organizations, is critical for determining whether the organization’s charter can be effectively fulfilled in the context of modern interactions and under the present-day cost constraints.

Blazent is a recognized industry leader in providing IT data quality solutions – helping companies, governments and state agencies receive more value from their existing IT systems investments by making data more complete and trustworthy. As your agency looks to leverage emerging technology to improve operations, harness the experience of service-provider partners and engage with citizens in modern and innovative ways, Blazent has the solutions and experience working with state governments and their agencies to streamline operations and generate millions in savings.

How Data-Integrity Technology Can Help Create a Patient Health Timeline

For healthcare providers to deliver the best diagnosis and treatment for their patients, the data on which they rely must be of the highest quality, completeness and trustworthiness. It must also accurately reflect how the patient’s health has changed over time. One of the goals of health reform and digital medical records efforts during the past decade has been enabling the creation of unified medical records. This “patient health timeline” would be a complete digital chronology of the patient’s lifetime medical history (including symptoms, test results, diagnosis, provider notes and treatment activities) that providers can use when treating the patient.

An ambitious goal, the “patient health timeline” has been a difficult vision to realize due to the volume and fragmentation of patient health records – some of which have been digitized and some of which still reside in paper form only. Fortunately, for patients younger than 20, the majority of their health data exists in a digital form that can eventually be integrated. A number of technical challenges currently prevent the realization of the “patient health timeline,” but data-quality management technology is rapidly helping companies overcome some of them.

Fragmentation: Health records for a single patient are spread across the systems of a number of healthcare providers, insurance companies, pharmacies, hospitals and treatment centers. Each of these systems is unique, with no standard means of integrating patient data. Properly contextualizing data through an accurate set of relationships is key to establishing the integrity of integrated data from different sources.

Accuracy: There are portions of a patient’s health record which are relatively static throughout their lifetime (family medical history, allergies, chronic conditions and demographic data) and other portions that change with the patient’s health status and general aging (height/weight, reported symptoms, diagnosis and treatments, mental state, etc.). For the static portions (e.g., profile information), provider records often contain conflicting data, which must be reconciled and validated for accuracy. For the portions of the health timeline that change over time, identifying accurate cause-and-effect relationships among data items is key to creating the actionable insights that providers need. Data validation and reconciliation technology can help companies both resolve data conflicts and identify relationships within data.
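As a sketch of what “reconciled and validated” means in practice, the fragment below merges conflicting copies of a patient’s static profile fields; the field names, sources, and the freshest-source-wins rule are illustrative assumptions, not any vendor’s actual algorithm:

```python
from datetime import date

# Two copies of the same patient's profile from different providers;
# the blood_type values conflict and must be reconciled.
records = [
    {"source": "clinic",   "updated": date(2016, 3, 1),
     "fields": {"blood_type": "O+", "allergies": "penicillin"}},
    {"source": "hospital", "updated": date(2017, 6, 12),
     "fields": {"blood_type": "O-", "allergies": "penicillin"}},
]

def reconcile(records):
    """Merge records field by field, keeping the most recently updated
    value and logging every conflict for human review."""
    merged, conflicts = {}, []
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec["fields"].items():
            if field in merged and merged[field] != value:
                conflicts.append((field, merged[field], value))
            merged[field] = value  # later (fresher) records win
    return merged, conflicts

merged, conflicts = reconcile(records)
print(merged)     # blood_type resolved to the fresher hospital value
print(conflicts)  # the O+/O- disagreement is preserved for review
```

The important design point is that conflicts are surfaced rather than silently overwritten; for clinical data, a human (or a stricter validation rule) has to adjudicate disagreements like a blood-type mismatch.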

Patient Privacy: Regulations require patients to grant specific authorization for the use and sharing of personal health records. Compiling the patient health timeline would require the patient to authorize the integration of the data, to authorize the use of the timeline data after it is compiled, and to retain the ability to revoke authorization for specific data points or sets in the future. The patient health timeline, therefore, must be assembled in a structured and managed way that would enable disassembly in the future. Data quality management technology can help enable this transformation, so patients and providers know that the health data is both trustworthy and private.
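The “assembled for disassembly” requirement implies that every data point on the timeline must carry provenance, so that revoking consent for one source removes exactly that source’s contributions. A minimal sketch of the idea, with a hypothetical per-source consent map:

```python
def assemble_timeline(data_points, consents):
    """Include only data points whose source currently has patient
    consent. `consents` is a hypothetical map: source name -> bool.
    Because each point carries its source, revoking consent later
    simply re-filters the timeline (disassembly)."""
    return [p for p in data_points if consents.get(p["source"], False)]


def revoke(consents, source):
    """Return an updated consent map with one source revoked."""
    return {**consents, source: False}
```

Because assembly is a pure function of the data points and the consent map, a revocation never requires editing the underlying records, only rebuilding the view.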

Blazent is the leader in Data Quality, helping organizations drive downstream value by validating and transforming data into actionable intelligence. Nowhere is actionable intelligence needed more than in improving the quality of people’s health and wellness. How and when the patient health timeline will become a reality is still to be determined, but it is clear that data-integrity technology will be a critical component.

Your current discovery tool is not giving you what you need

Here is what you need to know

Your current discovery tool is not up to the task – period. Discovery tools are very good at performing the task for which they were designed – discovery. They are designed to look into a defined environment to identify, inventory and classify known types of objects and their direct connections and dependencies. What they don’t do is combine the data about the discovered objects with all of the other data about your environment to give you the big-picture perspective needed to enable effective decision making. This is a core requirement for nearly every business with which we’ve recently spoken, and the existing discovery tools in use are consistently insufficient for the task.

Your Discovery Tool is part of the problem

Discovery tools are contributing to your big-data problem by making data collection easier and faster, but failing to help you interpret that data and convert it into actionable insights. Here are 5 ways your current discovery tools give you what you think you need while falling short of your true needs.

  1. Discovery tools can indicate what is present, but not how it is used. A full appreciation of your IT environment requires an understanding of both its content (the objects within the environment) and context (the activities taking place). Discovery tools do a good job of capturing the content, within specific parameters (that is, discovery tools are often optimized to discover specific classes of CIs). Contextualization, however, often involves correlation of multiple discovery systems and integration with, for example, known business processes, job functions, etc. This operational context is critical to convert data into actionable insights.
  2. Discovery tools can’t indicate what was intended, only what is operating. Most IT environments were not created based upon a grand design, but are the result of an evolutionary process over a number of years. Discovery tools can provide valuable insight into the objects that are operating in the environment today, but they are incapable of capturing those objects that may have been present in the past and the impact their legacy is having on the present. Fragmented implementations, historical technology limitations and design decisions are all likely responsible for why your environment is the way it is today. Historical perspective and intent cannot be captured by only looking at the present environment, and yet they are critical as a basis for informed decisions about the future.
  3. Discovery tools can’t indicate what is missing, only what is present. Because discovery tools provide a single point of view of the operating environment, they can’t capture objects that are expected to be present in the environment, but for whatever reason are no longer present. Absence of data creates a gap that discovery tools are incapable of processing, which means the completeness of data is now at constant risk. Reconciling overlapping data sets from multiple sources/points of view is required to answer the completeness question.
  4. A single discovery tool won’t capture your entire environment; you will need many. Discovery tools are designed to look for a discrete set of objects of known characteristics, which means these tools are adept at finding and inventorying those specific classes of objects. With the diversity of modern IT ecosystems, it is practically impossible for a single discovery tool to understand and process all of the object types that are present. Some discovery tools are very good at capturing physical objects, while others capture software, etc. Discovery tools should be treated as specialists and the discovery process as a team sport – leveraging the capabilities of multiple players.
  5. Discovery tools don’t react to unknown objects very well. They are great for automating the discovery of things that are well known and easily identifiable, but nuanced variation among objects and the capture of new object types may require manual activities and/or additional tooling. At their core, discovery tools are rules-based systems. As the definitions of the rules improve, discovery tools will have the capability to capture and classify a greater percentage of the objects in the environment, but there will always be exceptions. These exceptions are often the most meaningful for providing environmental insights.
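Points 3 and 4 above reduce to a reconciliation exercise: union the views of several specialist tools, then compare the result against an expected inventory (for example, a CMDB) to surface what is missing as well as what is unexpected. The sketch below is a simplified illustration of that set logic, not any particular product’s implementation.

```python
def reconcile_discovery(expected, *tool_results):
    """Combine the views of multiple discovery tools and compare them
    against the expected inventory.

    `expected` is a set of object identifiers from a system of record
    (e.g., a CMDB); each element of `tool_results` is the set of
    identifiers one discovery tool actually found."""
    discovered = set().union(*tool_results) if tool_results else set()
    return {
        "missing": expected - discovered,     # expected, but no tool saw it
        "unexpected": discovered - expected,  # seen, but not on record
        "confirmed": expected & discovered,   # seen and on record
    }
```

No single tool’s result set can populate the "missing" bucket on its own; it only emerges when the overlapping views are reconciled against a second point of view, which is exactly the gap single-tool discovery leaves open.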

An important part of modern IT management

Discovery tools are important in the modern management of IT environments; however, independently they are clearly incapable of satisfying the overall need of most organizations. To gain the most value from discovery tool investments, companies must look at how they use multiple discovery tools together to provide a broad and holistic perspective on the environment.

As the data from discovery tools is integrated with known data from existing sources, reconciling conflicts, addressing gaps and putting data into the correct context are critical. Blazent is an industry leader in providing the solutions needed to gather and refine your discovery data and resolve the quality issues that are keeping you from achieving your goals of information insight and data-driven decision making.

Five key barriers to IT/OT integration and how to overcome them

Operational Technology (OT) consists of hardware and software that are designed to detect or cause changes in physical processes through direct monitoring and control of devices. As companies increasingly embrace OT, they face a dilemma as to whether to keep these new systems independent or integrate them with their existing IT systems. As IT leaders evaluate the alternatives, there are 5 key barriers to IT/OT integration to consider.

Business Process Knowledge – OT is an integrated part of the physical process itself and requires subject matter experts with both business process knowledge as well as technical skills related to the OT devices being used. IT staff members are often strong technologists, but lack the business and physical process expertise needed to support OT. Increasingly, companies are overcoming this challenge either through training manufacturing and operations staff on the technical skills or leveraging the use of a specialized partner to support the OT implementations.

Manageability & Support – OT systems are often distributed across geographic locations separate from the IT staff and are costly to connect to centralized management resources. This makes monitoring, control and the management of incidents and problems more difficult and costly. Remote management capabilities, advanced diagnostics, designed redundancies and self-healing capabilities built into the OT devices help overcome these challenges, at a price.

Dependency Risk – Two of the key challenges of enterprise IT environments are managing the complex web of dependencies and managing the risk of service impact when a dependent component fails or is unavailable. With traditional IT, the impact typically falls on some human activity, and the user is able to mitigate it through some type of manual workaround. For OT, companies must be very careful managing the dependencies on IT components to avoid the risk of impacting physical processes when and where humans are not available to intervene and mitigate the situation. Since mitigation may not be an option with OT, controlling the dependencies is often the best approach to avoid impact.

Management of OT Data – The data produced by OT devices can be large, diverse in content, time sensitive for consumption and geographically distributed (sometimes not even connected to the corporate network). In comparison, most IT systems have some level of tolerance for time delays, are relatively constrained in size and content and reliably connected to company networks, making them accessible to the IT staff for data management and support. With OT systems, a company will need to decide whether the integration of the data into the overall enterprise data picture is necessary or whether the data created by the OT system can be left self-contained locally.

Security – IT systems are a common target for malicious behavior by those wishing to harm the company. The integration of OT systems with IT creates additional vulnerability targets with the potential of impacting not just people and data, but also physical processes. Segmentation of IT and OT systems to prevent cross-over security vulnerabilities as well as targeted security measures for the OT systems can help companies mitigate this risk.

Operational Technology has an important role in manufacturing and operations automation and in enabling the digital enterprise. As IT leaders and their organizations learn to embrace this technology in the overall IT landscape, there are some key decisions that must be made about where to integrate with existing systems and how to do it in a way that mitigates cost and risk. The impact of OT on IT should not be underestimated.