Integrated Data is the Key for State Agencies to Become More Customer-centric

During the past few years, there has been considerable discussion and media attention about the impact of digital natives (Millennials and Generation Z) on business and culture – specifically the younger generations’ expectations of how technology enables human interactions. These shifts in expectations aren’t isolated to private-sector business; state and local government agencies are being similarly challenged to think differently about technology and how it is used to modernize the relationship between citizens and their governments.

For a long time, state agencies have been built on a foundation of bureaucracy, process and structure, imposing governmental culture and value systems on the citizens and organizations that interact with them. The impact shows up not only in the inherent inefficiencies this creates, but also in the steadily increasing cost of providing government services. Fortunately, the environment is changing. Government agencies are increasingly looking to private industry as an example of modern customer-centric interactions and the internal capabilities needed to enable them.

While these are fundamentally people-and-process interactions, they cannot take place in the modern environment without technology and data. As a result, it is no surprise that state IT functions find themselves at the center of their organizations’ transformation efforts. Government CIOs, in turn, are adopting the operating models, strategies, processes and technologies of their business counterparts to address their organizations’ needs.

State IT organizations have been some of the strongest proponents of IT service management, enterprise architecture and data governance standards. While it may appear that these approaches perpetuate the bureaucratic mindset, in reality they establish a framework in which the lines between government and private industry can be blurred, and citizens can benefit from the strengths of government organizations in new and innovative ways.

The modern state IT organization is:

Transparent – Service-centric approaches to state IT are enabling agencies to leverage an ecosystem of public and private partners to support their organizational mission. At the same time, data and processes are being integrated for transparency, yielding insights that support continuous improvement.

Responsive – With ITSM process improvements, state IT organizations are not only capable of being more responsive to the needs of citizens and staff, but also to changes in the technology, legislative and business environments. Structured operations and high-quality data make it easier to identify and address changes and issues.

Connected – Enterprise Architecture is enabling agencies to maintain a trustworthy core of information needed to support decision-making and to integrate that core with cutting-edge technology capabilities – such as IoT, geospatial data, operational technology and mobile devices – to enable connected insights on par with private-industry initiatives.

State processes have always been data-centric – collecting, processing and analyzing information to support the agency’s charter. Recently, however, the interpretation of this charter has expanded to include a stronger focus on the efficient use of resources and the organization’s effectiveness in making a positive impact on the community it serves. While standards provide a framework for transparency, responsiveness and connectivity, success relies heavily on implementation. How IT systems are implemented, both internally and in conjunction with the broader ecosystem of public and private partner organizations, determines whether the organization’s charter can be fulfilled in the context of modern interactions and under present-day cost constraints.

Blazent is a recognized industry leader in providing IT data quality solutions – helping companies, governments and state agencies derive more value from their existing IT systems investments by making data more complete and trustworthy. As your agency looks to leverage emerging technology to improve operations, harness the experience of service-provider partners and engage with citizens in modern and innovative ways, Blazent has the solutions and the experience working with state governments and their agencies to streamline operations and generate millions in savings.

How Data-Integrity Technology Can Help Create a Patient Health Timeline

For healthcare providers to deliver the best diagnosis and treatment for their patients, the data on which they rely must be of the highest quality, completeness and trustworthiness. It must also accurately reflect how the patient’s health has changed over time. One of the goals of health reform and digital medical records efforts during the past decade has been enabling the creation of unified medical records. This “patient health timeline” would be a complete digital chronology of the patient’s lifetime medical history (including symptoms, test results, diagnoses, provider notes and treatment activities) that providers can use when treating the patient.

An ambitious goal, the “patient health timeline” has been a difficult vision to realize due to the volume and fragmentation of patient health records – some of which have been digitized and some of which still exist only on paper. Fortunately, for patients younger than 20, the majority of health data exists in a digital form that can eventually be integrated. A number of technical challenges currently prevent the realization of the “patient health timeline,” but data-quality management technology is rapidly helping companies overcome some of them.

Fragmentation: Health records for a single patient are spread across the systems of a number of healthcare providers, insurance companies, pharmacies, hospitals and treatment centers. Each of these systems is unique, with no standard means of integrating patient data. Properly contextualizing data through an accurate set of relationships is key to establishing the integrity of integrated data from different sources.

Accuracy: Some portions of a patient’s health record are relatively static throughout their lifetime (family medical history, allergies, chronic conditions and demographic data), while other portions change with the patient’s health status and general aging (height/weight, reported symptoms, diagnoses and treatments, mental state, etc.). For the static portions (e.g., profile information), provider records often contain conflicting data, which must be reconciled and validated for accuracy. For the portions of the health timeline that change over time, identifying accurate cause-and-effect relationships among data items is key to creating the actionable insights that providers need. Data validation and reconciliation technology can help companies both resolve data conflicts and identify relationships within data.
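
To make the reconciliation step concrete, here is a minimal sketch in Python of how conflicting “static” fields from multiple provider systems might be resolved. The source names, fields and resolution rule (non-empty values, majority agreement, falling back to the most recent record) are illustrative assumptions, not a description of any specific product.

```python
from collections import Counter
from datetime import date

# Same patient as seen by three hypothetical source systems.
records = [
    {"source": "clinic_ehr", "updated": date(2016, 3, 1), "blood_type": "O+", "allergies": "penicillin"},
    {"source": "hospital",   "updated": date(2017, 1, 9), "blood_type": "O+", "allergies": ""},
    {"source": "pharmacy",   "updated": date(2015, 7, 4), "blood_type": "0+", "allergies": "penicillin"},
]

def reconcile(records, fields):
    resolved = {"_conflicts": []}
    for field in fields:
        # Keep only non-empty values, newest first.
        candidates = sorted(
            (r for r in records if r.get(field)),
            key=lambda r: r["updated"],
            reverse=True,
        )
        if not candidates:
            resolved[field] = None  # a gap to flag for manual follow-up
            continue
        counts = Counter(r[field] for r in candidates)
        value, count = counts.most_common(1)[0]
        if count > len(candidates) / 2:
            resolved[field] = value                 # clear majority wins
        else:
            resolved[field] = candidates[0][field]  # fall back to the newest record
            resolved["_conflicts"].append(field)    # and flag the field for validation
    return resolved

print(reconcile(records, ["blood_type", "allergies"]))
# {'_conflicts': [], 'blood_type': 'O+', 'allergies': 'penicillin'}
```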

Patient Privacy: Regulations require patients to grant specific authorization for the use and sharing of personal health records. Compiling the patient health timeline would require the patient to authorize both the integration of the data and the use of the timeline after it is compiled, while retaining the ability to revoke authorization for specific data points or data sets in the future. The patient health timeline, therefore, must be assembled in a structured and managed way that would enable disassembly later. Data quality management technology can help enable this transformation, so patients and providers know that the health data is both trustworthy and private.

Blazent is the leader in Data Quality, helping organizations drive downstream value by validating and transforming data into actionable intelligence. Nowhere is actionable intelligence needed more than in improving the quality of people’s health and wellness. How and when the patient health timeline will become a reality is still to be determined, but it is clear that data-integrity technology will be a critical component.

Your current discovery tool is not giving you what you need

Here is what you need to know

Your current discovery tool is not up to the task – period. Discovery tools are very good at performing the task for which they were designed – discovery. They are designed to look into a defined environment to identify, inventory and classify known types of objects and their direct connections and dependencies. What they don’t do is combine the data about the discovered objects with all of the other data about your environment to give you the big-picture perspective needed to enable effective decision making. This is a core requirement for nearly every business with which we’ve recently spoken, and the existing discovery tools in use are consistently insufficient for the task.

Your Discovery Tool is part of the problem

Discovery tools are contributing to your big-data problem by making data collection easier and faster, while failing to help you interpret that data and convert it into actionable insights. Here are 5 ways your current discovery tools give you what you think you need while falling short of what you actually need.

  1. Discovery tools can indicate what is present, but not how it is used. A full appreciation of your IT environment requires an understanding of both its content (the objects within the environment) and context (the activities taking place). Discovery tools do a good job of capturing the content, within specific parameters (that is, discovery tools are often optimized to discover specific classes of CIs). Contextualization, however, often involves correlation of multiple discovery systems and integration with, for example, known business processes, job functions, etc. This operational context is critical to convert data into actionable insights.
  2. Discovery tools can’t indicate what was intended, only what is operating. Most IT environments were not created from a grand design; they are the result of an evolutionary process spanning many years. Discovery tools can provide valuable insight into the objects operating in the environment today, but they cannot capture the objects that were present in the past or the impact their legacy has on the present. Fragmented implementations, historical technology limitations and past design decisions are all likely reasons why your environment is the way it is today. Historical perspective and intent cannot be captured by looking only at the present environment, and yet they are critical as a basis for informed decisions about the future.
  3. Discovery tools can’t indicate what is missing, only what is present. Because discovery tools provide a single point of view of the operating environment, they can’t capture objects that are expected to be present in the environment, but for whatever reason are no longer present. Absence of data creates a gap that discovery tools are incapable of processing, which means the completeness of data is now at constant risk. Reconciling overlapping data sets from multiple sources/points of view is required to answer the completeness question.
  4. A single discovery tool won’t capture your entire environment; you will need many. Discovery tools are designed to look for a discrete set of objects with known characteristics, which makes them adept at finding and inventorying those specific classes of objects. With the diversity of modern IT ecosystems, it is practically impossible for a single discovery tool to understand and process all of the object types that are present. Some discovery tools are very good at capturing physical objects, while others capture software, etc. Discovery tools should be treated as specialists and the discovery process as a team sport – leveraging the capabilities of multiple players (see the sketch after this list).
  5. Discovery tools don’t react to unknown objects very well. They are great for automating the discovery of things that are well known and easily identifiable, but nuanced variation among objects and the capture of new object types may require manual activities and/or additional tooling. At their core, discovery tools are rules-based systems. As the definitions of the rules improve, discovery tools will have the capability to capture and classify a greater percentage of the objects in the environment, but there will always be exceptions. These exceptions are often the most meaningful for providing environmental insights.
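
As an illustration of that “team sport” approach, the following Python sketch merges the output of two hypothetical discovery tools and compares the result with what a CMDB expects to exist. The tool names, field names and the match key (serial number, falling back to hostname) are assumptions made for the example.

```python
# Hypothetical output from two discovery tools plus the CMDB's expectation.
network_scan = [
    {"hostname": "db01",  "serial": "SN-1001", "ip": "10.0.0.5"},
    {"hostname": "web02", "serial": "SN-1002", "ip": "10.0.0.9"},
]
agent_inventory = [
    {"hostname": "db01",  "serial": "SN-1001", "os": "RHEL 7.3"},
    {"hostname": "app03", "serial": "SN-1003", "os": "Windows Server 2012"},
]
cmdb_expected = {"SN-1001", "SN-1002", "SN-1003", "SN-1004"}  # what *should* be out there

def merge(*sources):
    """Combine records that share a key; later sources add fields to earlier ones."""
    assets = {}
    for source in sources:
        for record in source:
            key = record.get("serial") or record["hostname"]
            # Simplistic merge: conflicting values from a later source overwrite
            # earlier ones; a real reconciliation step would score and resolve them.
            assets.setdefault(key, {}).update(record)
    return assets

assets = merge(network_scan, agent_inventory)
discovered = set(assets)

print("Discovered but not expected:", discovered - cmdb_expected)    # set()
print("Expected but never discovered:", cmdb_expected - discovered)  # {'SN-1004'}
```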

An important part of modern IT management

Discovery tools are an important part of the modern management of IT environments; however, on their own they clearly cannot satisfy the overall needs of most organizations. To gain the most value from discovery tool investments, companies must look at how they use multiple discovery tools together to provide a broad and holistic perspective on the environment.

As data from discovery tools is combined with known data from existing sources, reconciling conflicts, addressing gaps and putting data into the correct context are critical. Blazent is an industry leader in providing the solutions needed to gather and unify your discovery data and resolve the quality issues that are keeping you from achieving your goals of information insight and data-driven decision making.

Five key barriers to IT/OT integration and how to overcome them

Operational Technology (OT) consists of hardware and software that are designed to detect or cause changes in physical processes through direct monitoring and control of devices. As companies increasingly embrace OT, they face a dilemma as to whether to keep these new systems independent or integrate them with their existing IT systems. As IT leaders evaluate the alternatives, there are 5 key barriers to IT/OT integration to consider.

Business Process Knowledge – OT is an integrated part of the physical process itself and requires subject matter experts with both business process knowledge and technical skills related to the OT devices being used. IT staff members are often strong technologists, but lack the business and physical process expertise needed to support OT. Increasingly, companies overcome this challenge either by training manufacturing and operations staff on the technical skills or by leveraging a specialized partner to support the OT implementations.

Manageability & Support – OT systems are often distributed across geographic locations separate from the IT staff and are costly to connect to centralized management resources. This makes monitoring, control and the management of incidents and problems more difficult and costly. Remote management capabilities, advanced diagnostics, designed redundancies and self-healing capabilities built into the OT devices help overcome these challenges, at a price.

Dependency Risk – Two of the key challenges of enterprise IT environments are managing the complex web of dependencies and managing the risk of service impact when a dependent component fails or is unavailable. With traditional IT, the impact typically falls on some human activity, and the user can mitigate it through some type of manual workaround. With OT, companies must be very careful managing dependencies on IT components to avoid the risk of impacting physical processes when and where humans are not available to intervene and mitigate the situation. Since mitigation may not be an option with OT, controlling the dependencies is often the best approach to avoiding impact.

Management of OT Data – The data produced by OT devices can be large, diverse in content, time-sensitive for consumption and geographically distributed (sometimes not even connected to the corporate network). By comparison, most IT systems have some tolerance for time delays, are relatively constrained in size and content, and are reliably connected to company networks, making them accessible to the IT staff for data management and support. With OT systems, a company will need to decide whether integrating the data into the overall enterprise data picture is necessary or whether the data created by the OT system can remain self-contained locally.
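
As a simple illustration of the “leave it local or integrate it” decision, the sketch below (hypothetical sensor names, values and window size) rolls high-frequency OT readings up into per-window summaries at the edge, so that only the summaries, not the raw stream, need to be forwarded to the enterprise data platform.

```python
import statistics
from collections import defaultdict

# Raw edge readings: (sensor_id, unix_timestamp, value) – values are made up.
readings = [
    ("pump_7_vibration", 1_500_000_000, 0.41),
    ("pump_7_vibration", 1_500_000_015, 0.44),
    ("pump_7_vibration", 1_500_000_030, 0.39),
    ("pump_7_vibration", 1_500_000_075, 1.92),  # out-of-range spike
]

def summarize(readings, window_seconds=60):
    """Roll raw readings up into per-sensor, per-window summaries."""
    buckets = defaultdict(list)
    for sensor, ts, value in readings:
        buckets[(sensor, ts // window_seconds)].append(value)
    return [
        {
            "sensor": sensor,
            "window_start": window * window_seconds,
            "count": len(values),
            "mean": round(statistics.mean(values), 3),
            "max": max(values),
        }
        for (sensor, window), values in buckets.items()
    ]

# Only the summaries (and any alert-worthy spikes) leave the site;
# the raw stream stays self-contained locally.
for summary in summarize(readings):
    print(summary)
```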

Security – IT systems are a common target for malicious behavior by those wishing to harm the company. The integration of OT systems with IT creates additional vulnerability targets with the potential of impacting not just people and data, but also physical processes. Segmentation of IT and OT systems to prevent cross-over security vulnerabilities as well as targeted security measures for the OT systems can help companies mitigate this risk.

Operational Technology has an important role in manufacturing and operations automation and in enabling the digital enterprise. As IT leaders and their organizations learn to embrace this technology within the overall IT landscape, there are key decisions to be made about where to integrate with existing systems and how to do so in a way that mitigates cost and risk. The impact of OT on IT should not be underestimated.

Answers to 8 essential questions about assets that should be in your CMDB

Your Configuration Management Database (CMDB) is continuously increasing in size and complexity, driven by an endless list of components that need to be improved or new data sets that someone wants to add. You understand that more data doesn’t necessarily translate into more value. You wish someone could tell you, “What data do I actually need in my CMDB?” We can answer that question, and do it pretty precisely. At the core of any CMDB are the Asset/Configuration Item (CI) records. Here are the answers to 8 essential questions about assets that are important for managing the IT ecosystem and should be in your CMDB.

1. What are they? An accurate inventory of what assets and configuration items exist in your IT ecosystem is the foundation of your CMDB. Your asset/CI Records may come from discovery tools, physical inventories, supplier reports, change records, or even spreadsheets, but whatever their origin, you must know what assets you have in your environment.

2. Where are they? Asset location may not seem relevant at first, but the physical location of hardware, software and supporting infrastructure impacts what types of SLAs you can provide to users, the cost of service contracts with suppliers and, in some jurisdictions, the amount of taxes you are required to pay. In many organizations, the physical location of assets is only captured as the “ship-to address” on the original purchase order; good practice, however, dictates that you update this information frequently. Options include GPS/RFID tracking, change records, physical inventory and triangulation from known fixed points on a network.

3. Why do we have them? Understanding the purpose of an asset is the key to unlocking the value it provides to the organization. Keep in mind that an asset’s purpose may change over time as the business evolves. The intended purpose when the asset was purchased may not be the same as the actual purpose it serves today. Periodic review of dependencies, requests for change and usage/activity logs can provide insight into an asset’s purpose.

4. To what are they connected? Dependency information is critical for impact assessment, portfolio management, incident diagnosis and coordination of changes. Often, however, asset dependency data is incomplete, inaccurate or obsolete – providing only a partial picture to those who use it for decision-making. When capturing and managing dependency data, keep in mind that the business/IT ecosystem is constantly evolving (particularly with the proliferation of cloud services), so dependencies also carry important time attributes.

5. Who uses them? User activities and business processes should both be represented in the CMDB as CIs (they are part of your business/IT ecosystem). If not, then you are missing a tremendous opportunity to leverage the power of your ITSM system to understand how technology enables business performance. If you already have users, activities and processes in your CMDB, then the dependency relationships should frequently be updated from system transaction and access logs to show actual (not just intended) usage.

6. How much are they costing? Assets incur both direct and indirect costs for your organization. Examples include support contracts, licensing, infrastructure capacity, maintenance and upgrades, service desk costs, taxes and management/overhead by IT staff. The cost of each asset may not be easy to calculate, but it becomes the component cost for determining the total cost of providing services to users.

7. How old are they? Nothing is intended to be in your environment forever. Understanding the age and expected useful life of each of your assets helps you understand past and future costs (TCO) and informs decisions about when to upgrade versus when to replace an asset. Asset age information should include not only when the asset was acquired, but also when significant upgrades or replacements occurred that might extend its expected useful life to the organization.

8. How often are they changing? Change requests, feature backlogs and change management records provide valuable insight into the fitness of the asset for use (both intended and incidental). This information should be available from other parts of your ITSM system (change, problem management, portfolio management, etc.), but it is critical that current and accurate information about change be considered part of your asset records.
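
One way to picture the result is a single CI record shaped around these eight questions. The sketch below is illustrative only; the field names are assumptions, not a prescribed CMDB schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AssetRecord:
    ci_id: str                                                # 1. What is it?
    location: Optional[str] = None                            # 2. Where is it?
    purpose: Optional[str] = None                             # 3. Why do we have it?
    dependencies: list[str] = field(default_factory=list)     # 4. To what is it connected?
    users: list[str] = field(default_factory=list)            # 5. Who uses it?
    annual_cost: Optional[float] = None                       # 6. How much is it costing?
    acquired: Optional[date] = None                           # 7. How old is it?
    last_major_upgrade: Optional[date] = None
    open_change_requests: int = 0                             # 8. How often is it changing?
    source_systems: list[str] = field(default_factory=list)   # where each answer came from

server = AssetRecord(
    ci_id="srv-db01",
    location="Dallas DC, rack 14",
    purpose="Production order database",
    dependencies=["app-orders", "san-cluster-2"],
    users=["order-management"],
    annual_cost=18_500.00,
    acquired=date(2014, 6, 1),
    open_change_requests=2,
    source_systems=["discovery", "purchasing", "change-mgmt"],
)
```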

You should be able to find the answers to these 8 essential questions about assets in your CMDB. If you can’t, then you may have problems with either your data integration or asset data quality. If that is your situation, then Blazent can help. As industry leaders in data quality management solutions, Blazent can help you gather data from a broad set of sources and, through integration, correlation, reconciliation and contextualization, improve the quality of the core asset records in your CMDB, so you can maximize the decision-making value from your ITSM investments.

The future is closer than you think. Data is coming (and fast) – how will you manage it?

What will you do when your job and the future of your company hinge on your ability to analyze almost every piece of data your company has ever created against everything known about your markets, competitors and customers – and the impact of your decision will determine success or failure? That future is closer than you think. Data on an entirely different level is coming, and much faster than anyone realizes. Are you prepared for this new paradigm?

  • Technologists have been talking about “big data” as a trend for more than a decade, promising that it is coming “soon.” “Soon” is now in your rear-view mirror.
  • Companies have been capturing and storing operational and business process data for more than 20 years (sometimes longer), providing a deep vault of historical data, assuming you can access it.
  • IoT is leading to the creation of a massive stream of new operational data at an unprecedented rate. If you think volumes are high now, you’ve seen nothing yet.
  • The free flow of user-generated (un-curated) information across social media has enabled greater contextual insights than ever before, but, concurrently, the signal-to-noise ratio has plummeted.

What does all this mean? It means big data is already driving everything we do. The analytics capabilities of IT systems are becoming more sophisticated and easier for business leaders to use to analyze and tune their businesses. For them to be successful and make good decisions, however, the data on which they rely must be trustworthy, complete, accurate and inclusive of all available data sets.

Delivering the underlying quality data that leaders need is no small feat for the IT department. The problem has transformed from “not enough data” to “too much of a good thing.” The challenge facing most organizations is filtering out the noise in the data and amplifying the signal of information that is relevant and actionable for decision-making.

Inside your organization, historical data may be available in data warehouses and archives, but is likely fragmented and incomplete, depending on the source systems in use and the business processes being executed when the data was created. As IT systems have matured and more business and manufacturing processes are automated, the amount of operational (transactional) data created has increased. As part of stitching together business processes and managing changes across IT systems, data is often copied (and sometimes translated) multiple times, leading to large-scale data redundancy. IoT and OT are enabling instrumentation and the collection of an unprecedented volume and diversity of new operational data (often available in real-time) for operational monitoring and control, but the data from these collectors may have a limited useful life.
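
A small illustration of the redundancy problem: the sketch below (hypothetical systems and fields) fingerprints a normalized view of each record so that copies of the same logical record replicated across systems can be spotted.

```python
import hashlib
import json

copies = [
    {"system": "crm", "customer_id": "C-100", "name": "ACME Corp ", "country": "US"},
    {"system": "erp", "customer_id": "C-100", "name": "acme corp",  "country": "US"},
    {"system": "dwh", "customer_id": "C-205", "name": "Initech",    "country": "DE"},
]

def fingerprint(record, identity_fields=("customer_id", "name", "country")):
    """Hash a normalized view of the fields that define the record's identity."""
    normalized = {f: str(record[f]).strip().lower() for f in identity_fields}
    return hashlib.sha256(json.dumps(normalized, sort_keys=True).encode()).hexdigest()

seen = {}
for record in copies:
    key = fingerprint(record)
    if key in seen:
        print(f"{record['system']} holds a redundant copy of {seen[key]}'s record {record['customer_id']}")
    else:
        seen[key] = record["system"]
# -> erp holds a redundant copy of crm's record C-100
```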

Outside your organization lies a wealth of contextual data about your customers, competitors and markets. In the past, experts and journalists published information curated for accuracy and objectivity – providing a baseline expectation of data quality. In the age of social media, a large volume of subjective, user-generated content has replaced this curated information. Although this new content lacks the objective rigor of its curated predecessors, what has been lost in quality has been offset by an exponential increase in the quantity and diversity of data available for consumption. The challenge for business leaders is filtering through the noise of opinion and conjecture to identify the real trends and insights that exist within publicly available data sets.

For you to make the big decisions on which the future of your company depends, you must be able to gather all of the available data – past, present, internal and external – and refine it into a trustworthy data set that can yield actionable insights, while it is being continuously updated.

Machine Learning is re-inventing Business Process Optimization

Machine learning is a game changer for business process optimization – enabling organizations to achieve levels of cost and quality efficiency never previously imagined. For the past 30 years, business process optimization has been a tedious, time-consuming manual effort. Those tasked with it had to examine process output quality and review a very limited set of operational data to identify optimization opportunities based on historical process performance. Process changes then required re-measurement and comparison to pre-change data to evaluate the effectiveness of the change. Often, improvement impacts were either unmeasurable or failed to satisfy the expectations of management.

With modern machine-learning capabilities, process management professionals are able to integrate a broad array of sensors and monitoring mechanisms to capture large volumes of operational data from their business processes. This data can be ingested, correlated and analyzed in real-time to provide a comprehensive view of process performance. Before machine learning, managing the signals from instrumented processes was limited to either pre-defined scenarios or the review of past performance. These limitations have now been removed.

Machine learning enables the instrumentation of a greater number of activities because of its capability to manage large volumes of data. In the past, process managers had to limit what they monitored to avoid information overload when processing the data being collected. Cloud-scale services combined with machine learning give process managers greater flexibility. They can collect data for “what-if” scenario modeling, as well as train the machine-learning system to identify relationships and events within operational processes far more quickly than users could manually.

One of the most promising potential benefits of machine learning is the “learning” aspect. Systems are not constrained to pre-defined rules and relationships – enabling them to adapt dynamically to changes in the data set from the business process and make inferences about problems in the process. These inferences can then be translated into events and incidents – potentially leading to automated corrective action and/or performance optimization of the process.
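
As a concrete (and deliberately simplified) example of turning learned behavior into events, the sketch below trains an anomaly detector on historical process metrics and flags new readings that fall outside the learned pattern. The metric names and values are assumptions for illustration, and it uses scikit-learn’s IsolationForest rather than any specific product’s algorithm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical (cycle_time_seconds, defect_rate) pairs for a hypothetical process step.
normal_history = np.column_stack([
    rng.normal(120, 5, 500),       # cycle times around two minutes
    rng.normal(0.01, 0.002, 500),  # roughly a 1% defect rate
])

# Learn what "normal" looks like without pre-defined rules or thresholds.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

new_readings = np.array([
    [121.0, 0.011],   # consistent with history
    [180.0, 0.060],   # slow cycle and elevated defects: candidate event
])
for reading, label in zip(new_readings, model.predict(new_readings)):
    status = "OK" if label == 1 else "raise event / trigger corrective action"
    print(reading, "->", status)
```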

Even if companies are not ready to fully embrace machine-learning systems making decisions and taking actions without human intervention, there is tremendous near-term value in using machine-learning capabilities for correlation analysis and data validation to increase the confidence in, and quality of, the data being used to drive operational insights. Manual scrubbing of data can be very costly and, in many cases, can offset (and negate) the potential benefits that data insights provide to business process optimization. Machine learning enables higher-quality insights to be obtained at a much lower cost than was previously achievable.

In business process optimization, there is an important distinction to be made between “change” and “improvement.” Machine-learning systems can correlate a large diversity of data sources – even without pre-defined relationships. They provide the ability to qualify operational (process) data with contextual (cost/value) data to help process managers quantify the impacts of inefficiencies and the potential benefits of changes. This is particularly important when developing a business justification for process optimization investments.

Machine learning is a true game changer for process optimization and process management professionals. Process analysis can now involve an exponentially larger volume of data inputs, process the data faster and at a much lower price point, and generate near-real-time insights with quantifiable impact assessments. This enables businesses to achieve higher levels of process optimization and to be more agile in making changes when they are needed.

Optimizing Business Performance with People, Process, and Data

People are the heart and mind of your business. Processes form the backbone of your operations. Data is the lifeblood that feeds everything you do. For your business to operate at peak performance and deliver the results you seek, people, processes and data must be healthy individually, as well as work in harmony. Technology has always been important to bringing people, process and data together; however, technology’s role is evolving. As it does, the relationships among people, processes and data are also changing.

People are the source of the ideas and the engine of critical thinking that enables you to turn customer needs and market forces into competitive (and profitable) opportunities for your business. The human brain is uniquely wired to interpret a large volume of information from the environment, analyze it and make decisions about how to respond. The human intellect, combined with passion and creativity, serves as the source for your company’s innovative ideas – both to create new and to improve existing products and operations. Ironically, most companies have historically viewed human resources as the “brawn” of their organization (workers), not the brains (thinkers).

Business and manufacturing processes provide the structure of your company’s operations – aligning the activities and efforts of your people into efficient and predictable workflows. Processes are critical to enable the effective allocation of the organization’s resources and ensure consistent and repeatable outcomes in both products and business functions. As companies mature, they develop the capabilities to improve operational performance by observing processes in action.

Operational data enables the people and process elements of your company to work together, providing both real-time and historical indications of what activities are taking place and how well they are performing. Data is also the key enabler of scalability – allowing multiple people to perform related activities independently and communicate with one another. Without data, separation of responsibilities and specialization of job roles would be almost impossible.

The relationships among people, process and data are changing. Since the first industrial revolution, processes have been seen as the primary focus of businesses, with people serving as resources to execute those processes and data created as a by-product of the work taking place. Technology adoption and the introduction of IT and manufacturing automation have primarily centered on business process automation – retaining the process focus and seeking to increase output and reduce costs through the elimination or streamlining of human activities. A new generation of business-process data enabled better monitoring and control of processes to identify further opportunities for automation.

With the maturation of the information age, the benefits of investing in business-process automation are reaching a point of diminishing returns. Enterprises have already addressed the activities that could be easily and cost-effectively automated, and the majority of the recapturable human resource costs involved in executing business processes have been harvested.

Optimizing business performance in the current business environment requires companies to re-think the relationship between people, process, data, and the technology that enables them. Forward-looking companies are transitioning to a data-centric perspective, viewing data as the strategic asset of the organization and framing people, process and technology as enablers to the creation, management and consumption of data. Re-framing the relationship in this way unlocks a new set of business optimization opportunities.

People are no longer viewed as workers who execute processes, but as interpreters of environmental and operational data – making critical, time-sensitive decisions and continually adjusting activities to improve business performance. Processes are no longer viewed as the rigid backbone to which all other parts of the organization must be attached; instead, they become the source of operational data and the mechanism for implementing change. In the modern paradigm, technology becomes more data-centric, capturing a larger volume and diversity of data elements and helping humans correlate them to drive large-scale, real-time operational insights.

The ability of companies to fine-tune their organizations effectively for optimal business performance will largely depend on the quality and trustworthiness of the data assets at their disposal. Business processes have become more data-centric, and technology adoption has expanded the possibilities for new and diverse instrumentation. Bringing all of the operational, environmental and strategic data sources together to enable decision making has become critical to business success.

Blazent’s service intelligently unifies disparate sources of IT and operational data into a single source that supports decision making and process refinement. While people and process are critical, it is not just the enabling data, but the quality of the data, that determines whether a company accelerates or stalls when pressure is applied. Blazent’s core role in the management of data quality has always served as a catalyst for growth and innovation.

IT under attack: Data Quality Technology helps companies assess security vulnerabilities

In the wake of the most recent (May 2017) malware attack impacting computer systems around the world, company executives are in urgent discussions with IT leaders, asking them to provide assessments of risks and vulnerabilities and recommendations to safeguard the company’s information and operations. CIOs and IT leaders strongly depend on the accuracy, completeness and trustworthiness of the data at their disposal to make informed decisions. How confident are you of the data being used to protect your organization from harm?

One of the biggest challenges for IT leaders is creating a dependable “big picture” view of their organization’s technology ecosystem and dependencies, because pieces of their operational data are spread across a wide variety of technology management tools, ITSM systems, asset inventories, maintenance/patching logs and fragmented architecture models. While all of the pieces of the puzzle may be available, they often don’t fit together well (or easily), and data issues frequently appear – gaps, duplications, overlaps and conflicts, as well as problems with accuracy and out-of-date records. The result is a confusing picture for IT leaders, and one that cannot be shared with company executives without undermining confidence in IT leaders’ recommendations and decisions.

If the quality of a decision is only as good as the data that is the basis for making that decision, then the solution to this problem seems simple: “Improve the quality of the data.” For most organizations, “simple” is not a term that applies to data management. Integrating IT data from multiple systems into a unified enterprise picture involves a series of complex steps; integration, validation, reconciliation, correlation and contextualization must all be performed to ensure the quality and trustworthiness of the information consumed. Unfortunately, most companies’ ITSM, Data Warehouse, Operations Management and even reporting systems lack the capabilities to effectively perform the unification steps necessary to ensure the required levels of data quality. This is where specialized Data Quality management technology is needed.

Consider for a moment where the IT operational data related to the latest malware attack resides – focusing on identifying the areas of vulnerability and assessing the potential extent of impact on the business. This latest attack exploited a known security issue in certain versions of operating systems in use both on end-user computer systems and on some systems in data centers.

The asset records that identify the potentially impacted devices are typically found in an Asset Management system, Finance/Purchasing system, in network access logs or as configuration items in the CMDB of the ITSM system. Patching records that indicate what version of the operating system the devices are running may be found in a change management or deployment system (used by IT to distribute patches to users); asset management system (if devices are frequently inventoried); infrastructure management system (if the devices are in a data center); or in the ITSM system (if records are maintained when manual patching is done).
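
To make that concrete, here is a minimal sketch (hypothetical systems, field names and OS builds, not a real advisory list) of combining asset records with patching records to flag devices that may still be exposed.

```python
VULNERABLE_BUILDS = {"win7-sp0", "win2008-sp1"}   # illustrative only

asset_records = [   # e.g., exported from an asset management system or CMDB
    {"serial": "SN-1001", "hostname": "fin-laptop-12", "os_build": "win7-sp0"},
    {"serial": "SN-1002", "hostname": "hr-laptop-03",  "os_build": "win10-1703"},
    {"serial": "SN-1003", "hostname": "dc-srv-app01",  "os_build": "win2008-sp1"},
]
patch_records = {   # e.g., from a deployment/change system: serial -> patch applied?
    "SN-1001": True,
    "SN-1003": False,
    # SN-1002 never appears - a data gap, not evidence that it is safe
}

for asset in asset_records:
    if asset["os_build"] not in VULNERABLE_BUILDS:
        continue
    patched = patch_records.get(asset["serial"])
    if patched is True:
        status = "patched"
    elif patched is False:
        status = "VULNERABLE - schedule remediation"
    else:
        status = "UNKNOWN - no patch record, investigate"
    print(asset["hostname"], "->", status)
```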

Once potentially vulnerable devices have been identified, IT staff and decision makers must understand where the devices are being used within the organization to assess the impact on business operations. For end-user devices, assigned-user/owner data is typically contained in asset inventory records, IT access management/account management systems and server access logs. The user can be associated with a business function or department through HR records. For devices installed in data centers and other common locations, the ITSM system, purchasing records, asset inventories and/or architecture models can often be used to identify the relationships between the device and a business process or responsible department/function.

There are commonly at least 5 independent sources of data that must be combined to identify which devices are potentially vulnerable and which business functions depend on them. When these data sets are gathered, there will undoubtedly be a large number of duplicates, partial records, records for devices that have been retired or replaced, conflicting data about the same device and records containing stale, inaccurate data. According to Gartner, at any moment as much as 40% of enterprise data is inaccurate, missing or incomplete. Data quality technology can help integrate the data, resolve the issues, alert data management staff to areas that need attention and help decision makers understand the accuracy and completeness of the data on which they depend.

Blazent has been a leader in providing Data Quality solutions for more than 10 years and is an expert in integrating the types of IT operational data needed to help CIOs and IT leaders assemble an accurate and unified big picture view of their technology ecosystem. With data quality and trustworthiness enabled by Blazent’s technology, your leaders and decision makers can be confident that the information they are using to assess vulnerabilities and risks will lead to solid recommendations and decisions that protect your organization from harm.

Machine Learning and the rise of the Dynamic Enterprise

The term “dynamic enterprise” was introduced in 2008 as an enterprise architecture concept. Rather than striving for stability, predictability and maturity, dynamic enterprises began focusing on continuous and transformational growth – embracing change as the only constant. This shift began with the proliferation of social media and user-generated (Web 2.0) content, which started to replace the curated information previously available. Business and IT leaders in established enterprises, however, did not fully embrace this trend; in fact, they resisted it for many years, fearful of losing control of their organizations’ information and technology assets.

Outside the direct oversight of the IT organization, and fueled by the mindset of a younger generation of employees, the shift continued toward leveraging less accurate, subjective information (often crowd-sourced) to make business decisions. As organizations embraced larger volumes of less-accurate data, information began flowing more openly, changing the underlying nature of the dynamic enterprise. The question was no longer where information was sourced, but how the large volumes of often-conflicting data could be organized, categorized and consumed. Big data was constraining the vision of the dynamic enterprise.

As the data consumption trends evolved within the business environment, technologists (including Tim Berners-Lee, the inventor of the World Wide Web) were working behind the scenes on standards for a Semantic Web (Web 3.0), where computers could consume and analyze all of the content and information available. This enabling technology would bridge the divide between humans and their information and solve the big-data problem.

Making the data readable by computers was only part of the challenge. Most companies still lacked the technology capabilities and know-how to take advantage of the information at their disposal. Advancements in machine learning and cloud infrastructure during the past 3 years have finally unlocked the potential of big data for the masses. A few large cloud service providers have invested in computing infrastructure and developed the capabilities to ingest and process vast quantities of data. They have analyzed and correlated that data and made it available to users in the form of cloud services that require neither the technical expertise nor the capital investment that were formerly barriers to adoption.

As more enterprises and individuals leverage machine learning to draw insights from data, those insights become part of the “learned knowledge” of the system itself, helping the computer understand context and consumption behavior patterns that further improve its capability to bridge the human-information divide. Computers are also able to detect changes in information consumption as an indicator of potential change in the underlying information, reducing the risk of data silently becoming inaccurate or obsolete.

The dynamic enterprise is focused on continuous and transformative change. Until recently, the ability of humans to process information has limited the rate at which that change could take place. The maturation of machine learning and its accessibility to the mainstream business community has enabled enterprises to overcome this barrier and embrace what it means to be a truly dynamic enterprise. The challenge going forward will be to determine what questions to ask the computers and how to translate the newly available insights into business opportunities.

Blazent has helped our customers dramatically improve the effectiveness of their IT Service and Operations processes by improving the quality and integrity of the data upon which they depend. This provides a more stable basis for decision making, as well as providing insight into costs associated with both Service and Asset management.