What your ITSM vendors don’t want you to know about Discovery Technology

Are you comfortable with how your IT Service Management (ITSM) processes address the volume, diversity and rate of change present in your IT environment? Comfortable enough to bet your job on it?

Most traditional ITSM systems and their associated offerings are structured around configuration management processes. These import lists of Configuration Items (CIs) from a variety of source systems, and then map the CIs together either manually or through whatever data happens to be available. This approach has persisted for decades and has not kept pace with the breathtaking rate of IT change. As a result, this model severely limits the number of CI types that can be effectively managed in the Configuration Management Database (CMDB) and the rate at which environmental changes can be reflected. Current ITSM systems, by themselves, cannot keep up with this pace of change.

Configuration Management and the CMDB are the core of most IT Service Management processes. They serve as a hub for critical operational insights by representing the various pieces present within a business technology ecosystem and how those pieces relate to each other. Configuration data is represented using a series of lifecycle states, including “as-intended,” “as-designed,” “as-implemented,” “as-operated” and “as-used.” Enterprise Architecture is typically concerned with the “as-intended” and “as-designed” states, while traditional ITSM processes and systems focus on the “as-implemented” state. Creating these states often involves significant manual effort to inventory, map and model the components present in the IT environment.

What continues to be beyond the reach of most organizations is high-quality, dependable insight into how the ecosystem is actually implemented and how users and business processes are consuming its various components. Ironically, it is these operational insights that organizations need most to identify cost-reduction and value-creation opportunities.

The problem in nearly every instance is that IT depends on discovery tools to determine the breadth and depth of its service offerings. Discovery tools originated in the early 2000s, which in Internet time may as well be a thousand years ago. The IT world has changed dramatically, and continues to do so at ever-accelerating rates. The sheer volume of data entering any IT system is overwhelming, not to mention the variety and range of data sources. This entire scenario is also about to become much worse: it isn’t just that mobile technology is creating billions of entry points, but that the number is about to increase by trillions with the rise of sensor networks and the Internet of Things.

Discovery tools are meant to discover; they do not address variables such as false positives, false negatives, the lack of contextual references driven by limited fingerprinting capabilities, latency or granularity. Yet these are the parameters that define the modern IT environment, and the current range of discovery tools is not prepared for the task. This is why Gartner stated that “40% of an organization’s data is out of date or inaccurate.” Discovery, and the data quality needed to provide a reliable referential framework, are at the moment two different things.
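To make the failure modes above concrete, here is a toy sketch of how a data-quality layer might score a discovered CI record. The field names, weights and thresholds are entirely illustrative assumptions, not taken from any product: a record corroborated by several independent sources and seen recently scores high, while a stale, single-source record is flagged as a likely false positive.

```python
import time

def score_ci(record, now=None, max_age=86400):
    """Toy DQM-style confidence score for a discovered CI record.

    record: dict with "sources" (set of discovery sources that
    reported this CI) and "last_seen" (Unix timestamp).
    Weights and thresholds are illustrative only.
    """
    now = now or time.time()
    # Source agreement: more independent sightings, more trust (capped at 3).
    source_score = min(len(record["sources"]), 3) / 3
    # Freshness: decays linearly to zero over max_age seconds (latency).
    age = now - record["last_seen"]
    freshness = max(0.0, 1.0 - age / max_age)
    return 0.6 * source_score + 0.4 * freshness

# A fresh, multi-source record scores near 1.0; a day-old,
# single-source record would fall near 0.2 and merit review.
record = {"sources": {"netflow", "agent", "scan"},
          "last_seen": time.time()}
assert score_ci(record) > 0.95
```

A real DQM pipeline would add reconciliation rules, deduplication and fingerprint context, but even this crude score shows how cross-source agreement and latency can be turned into an actionable quality signal.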

The good news is that recent developments in the application of Data Quality Management (DQM) methods to Discovery technology are finally creating dependable snapshots of runtime operations of the ecosystem – which is a frightening prospect for many ITSM vendors. By applying DQM to Discovery, IT is able to identify CIs present in the operating environment, quickly classify them, and map dependency relationships based on both technical connections and time-based correlation of runtime events.
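The time-based correlation mentioned above can be sketched in a few lines. This is a minimal, hypothetical example (the event format and five-second window are assumptions): events emitted close together in time by different CIs are counted as co-occurrences, and pairs that repeatedly co-occur become candidate dependency edges for the CMDB.

```python
from collections import defaultdict

def correlate_events(events, window=5.0):
    """Count co-occurrences of runtime events from different CIs.

    events: list of {"ci": name, "ts": timestamp-in-seconds}.
    Two events co-occur when they fall within `window` seconds.
    Pairs that co-occur repeatedly are candidate dependency edges.
    """
    events = sorted(events, key=lambda e: e["ts"])
    counts = defaultdict(int)
    for i, ev in enumerate(events):
        j = i + 1
        while j < len(events) and events[j]["ts"] - ev["ts"] <= window:
            if events[j]["ci"] != ev["ci"]:
                pair = tuple(sorted((ev["ci"], events[j]["ci"])))
                counts[pair] += 1
            j += 1
    return counts

events = [
    {"ci": "web-01", "ts": 0.0},
    {"ci": "app-01", "ts": 1.2},
    {"ci": "db-01",  "ts": 1.9},
    {"ci": "web-01", "ts": 60.0},
    {"ci": "app-01", "ts": 61.1},
]
edges = correlate_events(events)
# ("app-01", "web-01") co-occurs twice -> likely dependency
```

Production tooling would combine this statistical signal with technical connection data (network flows, process trees) before asserting a relationship, but the principle is the same: let runtime behavior, not manual mapping, reveal the dependencies.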

The value this hybrid technology offers IT organizations is tremendous. By eliminating the need for manual inventory and mapping of CIs, the Configuration Management system will now be capable of capturing and maintaining a larger volume and wider variety of ecosystem components, making this data available for more granular analysis. The addition of DQM to Discovery also supports a greater diversity of CI types by reducing the need for pre-defined classifications and structured relationships. CI types that are recognized can automatically be validated. In addition, previously undiscovered CI types can be captured and classified based on embedded data within the component as well as the context in which the component is being used within the environment.

Perhaps the most important value of applying DQM to Discovery (and the greatest risk to ITSM vendors) is the combined capability to accommodate frequent changes in the operational environment (enabling the CMDB to be updated at the speed of business) and to discover runtime relationships between users and technology components (enabling a more accurate, real-time view to be put into use). With these combined capabilities, companies will have a much more accurate picture of what technology pieces exist within their environment, how they are being consumed and the value being created from them, including the cost of generating that value.

Most ITSM systems are consumed on a per-seat basis, so the vendor’s revenue is directly tied to the number of human users – providing a disincentive to replace manual processes with automation. DQM with Discovery has the potential not only to eliminate many manual configuration management activities, but also to serve, in the future, as a stepping stone toward automated event management, incident management and AI/bot-based diagnosis and remediation (significantly reducing the need for human helpdesk agents). It is understandable that ITSM vendors don’t want their customers to know about Discovery and DQM technology. This is perhaps the single most disruptive technology threatening their future revenue potential.

Before committing to your ITSM vendor’s current Discovery capabilities, start asking the hard questions.

  • How do they contextualize data?
  • How do they address the batch disconnect?
  • How do they recognize false positives or negatives?
  • How do they handle fingerprinting and account for its limitations?

Failing to ask these questions leaves you in the 60% club: if only 60% of your configuration data is accurate and you were being graded, you’d earn at best a D-. Do you want to be a D student, or would you rather be an A student? The current batch of discovery tools is depressing the grade curve for the entire industry.