IoT – Am I pleased or frightened?

IoT has become a very noisy space recently, largely because the current wave of interest is being driven by consumer applications (e.g. wearables, smart homes, etc.). If you want the masses excited, you have to be the noisiest kid on the block, because everyone else competing for their attention is doing the same thing.

While there is no doubt that consumer applications are pushing the B2C component to the front lines, the reality is that IoT has already been in play in the industrial world for quite a while.

One of the main differences between industrial IoT and consumer IoT is predictability. If a refinery has sensors set up on its pipelines to report flow and pressure, those numbers operate in a consistent range, and even when they don’t (e.g. a significant pressure drop), the outliers are accounted for in the analytic app that processes the incoming data. The main focus of integrating IoT into a production environment is optimizing workflow around whatever is driving business decisions (speed, volume, external deadlines, etc.). It is all predictable and consistent, which makes management of the process complicated, but not difficult.
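To make that concrete, here is a minimal sketch, with hypothetical thresholds and field names, of the kind of range check an industrial analytics pipeline might apply to incoming pressure readings.

```python
# Minimal sketch: range-based outlier handling for industrial sensor readings.
# Thresholds and field names are hypothetical, not from any specific system.

NORMAL_RANGE_PSI = (550.0, 720.0)   # expected operating band for this pipeline segment

def classify_reading(reading: dict) -> str:
    """Classify one sensor reading as 'normal', 'outlier', or 'alarm'."""
    pressure = reading["pressure_psi"]
    low, high = NORMAL_RANGE_PSI
    if low <= pressure <= high:
        return "normal"
    # A significant pressure drop is an expected, pre-modeled outlier, not a surprise.
    if pressure < low * 0.8:
        return "alarm"      # e.g. possible leak; route to operations immediately
    return "outlier"        # log it and fold it into the workflow-optimization analytics

readings = [
    {"sensor_id": "P-104", "pressure_psi": 612.0},
    {"sensor_id": "P-104", "pressure_psi": 431.5},
]
for r in readings:
    print(r["sensor_id"], classify_reading(r))
```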

When this model is applied to a B2C environment, the variability of input goes off the scale almost immediately. People may be creatures of habit, but when you have billions of little devices tracking every possible behavioral permutation for millions of consumers, the range of actionable scenarios goes up exponentially, and that assumes you have the technology to actually see what’s going on.

The real concern here is not necessarily the analytics (complicated, not difficult), but the security associated with how these devices all work (complicated and difficult). It’s apparently easy to hack into a server, or a PC, and now even a phone (and notice the devices being hacked keep getting smaller). The next step is hacking smart devices, which is to say, anything with an IP address. Very soon manufacturers are going to start embedding chips in everything. Why? Because the cost curve is dropping sharply, and the value of the data they can extract will go up very quickly (knowing how a product is used lets the manufacturer adjust both future development and marketing positioning).

Because of all this, there is a strong built-in incentive to add this technology everywhere, which means that pretty much everything in our lives will become hackable. Having a toaster report usage back to the manufacturer is useful for them, but brings no direct benefit to the consumer who bought it. What it does, however, is allow external access to skilled douche-bags who can hack in and cause the toaster to overheat and catch fire. Multiply that by an installed base of millions and suddenly things get very ugly.

Should you be excited by the possibilities inherent in IoT? Absolutely. We haven’t even begun to scratch the surface of this technology, and it’s already having a massive positive effect on our lives.

Should you be concerned about the possibilities inherent in IoT? Hell yes. No matter how clever engineers are in creating cool technology, there is someone equally clever at exploiting it for nefarious purposes.

What should we do? Embrace it. Expect the occasional rough patch, just as with any other ubiquitous technology, and essentially keep calm and carry on.

IoT and the cheerfully oblivious customer

Functions within companies that service end-user needs have historically relied on their customers’ transactional data to determine interests, habits and use of existing products and services. The analysis is often pretty straightforward (the customer bought tires 4 years ago, at 12,000 miles per year, so they’re ready for a new set), or based on collaborative filtering: if you suddenly and consistently start buying products for infants, they compare your profile to others who’ve previously done the same thing, and recommend whatever other members of that cohort have purchased.
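For readers who want to see the mechanics, here is a minimal, self-contained sketch of user-based collaborative filtering along the lines described above; the purchase data and similarity measure are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of user-based collaborative filtering (hypothetical data).
from collections import Counter

purchases = {
    "you":    {"diapers", "baby formula", "wipes"},
    "cust_a": {"diapers", "baby formula", "crib", "stroller"},
    "cust_b": {"diapers", "wipes", "baby monitor"},
    "cust_c": {"tires", "motor oil"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity between two purchase histories (0..1)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target: str, k: int = 3) -> list[str]:
    mine = purchases[target]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == target:
            continue
        sim = jaccard(mine, theirs)
        # Weight each item the neighbor bought (that we haven't) by how similar they are.
        for item in theirs - mine:
            scores[item] += sim
    return [item for item, _ in scores.most_common(k)]

print(recommend("you"))  # ['baby monitor', 'crib', 'stroller']
```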

All of this is fine, and it works reasonably well, but it is all based on a data set driven specifically by the end user and their actions. Where it’s going to get much more interesting is when the network that surrounds that user begins to provide ancillary data that delivers a much more subtle context for what is going on, and more importantly, what is likely to happen next. Predictive analytics has been around for quite a while, but like most analytic models, it is primarily based on end-user behavior projected forward, rather than factoring in input from the overall environment in which that user is operating.

Now that our environment has evolved to become a transaction enabler, the applied examples of IoT become potentially endless. While IoT is likely to touch everyone everywhere, the deployment of the service depends on the implementation framework, for example:

Location framework: Anyone carrying a mobile device is already geo-location enabled, and the same applies to most cars built in the past few years. So how can IoT drive a stronger customer experience in this context? BMW is deploying technology that is triggered when your car hits a wet spot on the road. Your car starts to hydroplane, the ABS system locks in, skid avoided. But then your car sends a signal to all other BMWs in the area saying “this spot on this road is wet, engage ABS automatically as you approach.” A car that is one step ahead of you and focused on your safety is a strong competitive differentiator, and one that delivers a very high-value customer experience, even when the customer has no idea it’s happening.
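Purely as an illustration (the actual BMW implementation is not described here), a vehicle-to-vehicle hazard broadcast of this kind might look roughly like the following sketch; every field name and value is hypothetical.

```python
# Hypothetical sketch of a vehicle-to-vehicle road-hazard broadcast.
# Message fields, advice text and radius are illustrative assumptions only.
import json
import time

def build_hazard_message(vehicle_id: str, lat: float, lon: float) -> str:
    """Package a 'slippery surface detected' event for nearby vehicles."""
    return json.dumps({
        "type": "SLIPPERY_SURFACE",
        "source_vehicle": vehicle_id,
        "location": {"lat": lat, "lon": lon},
        "detected_at": int(time.time()),
        "advice": "pre-arm ABS / traction control on approach",
        "relevance_radius_m": 500,
    })

def on_hazard_received(raw: str) -> None:
    """What a receiving car might do with the broadcast."""
    msg = json.loads(raw)
    # A real system would check its own position, heading and distance before acting.
    print(f"Hazard {msg['type']} near {msg['location']} -> {msg['advice']}")

raw = build_hazard_message("car-42", 48.1374, 11.5755)
on_hazard_received(raw)
```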

Temporal framework: Most food has a limited shelf life. The easiest example is milk; two weeks after you’ve bought it, best not to open the carton at all. Right now the only way to tell if milk is spoiled is the sniff test, which I’m pretty sure most people would prefer to avoid. RFID sensors embedded in the carton can be set to trigger a signal once the expiration date passes, feeding a grocery list app that builds out as your shopping date approaches. No more rock/paper/scissors with your spouse to see who sniffs the milk carton. The same premise can be applied to medications as they approach their expiration date, or essentially anything you put in your body.
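As a toy sketch, with invented products and dates, the logic tying expiration signals to a grocery list could be as simple as this.

```python
# Toy sketch: expired-item signals feeding a grocery list (all data hypothetical).
from datetime import date

pantry = [
    {"item": "milk",         "expires": date(2024, 5, 2)},
    {"item": "ibuprofen",    "expires": date(2026, 1, 31)},
    {"item": "orange juice", "expires": date(2024, 4, 28)},
]

def build_shopping_list(today: date) -> list[str]:
    """Add anything past its expiration date to the next shopping trip."""
    return [p["item"] for p in pantry if p["expires"] < today]

print(build_shopping_list(date(2024, 5, 5)))  # ['milk', 'orange juice']
```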

Network framework: Disaster preparation and response has always been a major challenge. Even with a disaster that comes with a lengthy heads-up (e.g. hurricanes), people are still caught flat-footed, government agencies struggle to cope, and life sucks a lot more than it should. A big part of the problem is knowing what is where when, and who needs what how quickly. Embedding sensors in all supplies (down to individual items) and tying them to a visualization app can let relief agencies track exactly where critical supplies are (rather than shipping truckloads of unlabeled boxes, which need to be opened and catalogued on site, usually under sub-optimal conditions).

As mentioned in earlier posts, IoT technology is an incremental capability layer on top of supporting technology like the internet and wireless. Because this capability has been around for so long and is so ubiquitous, people are rarely off the grid. Your day-to-day life is a series of network-enabled transactions; the average person checks their phone over 250 times per day (while for the average teenager, the number of times they check is beyond the scope of current mathematics). Knowing people are creatures of habit, and knowing these creatures are on the grid, provides an environment where everything can be tracked to deliver a far more holistic and engaging experience than we’ve seen to date. This is particularly the case now that things can talk to each other on our behalf while we obliviously go about our daily grind.

IoT is by far the most pervasive change we’ve seen in the technology space since the introduction of the Internet itself. While this “always on, always watching” scenario can be unnerving for some people, most IoT data is targeted toward the end user’s best interest, since the company delivering it is looking to expand customer engagement by adding value to their daily routine. This is good for the provider, good for the end user, and will continue to expand exponentially as more and more devices interconnect.

 

How infrastructure convergence drives IoT

For those of us who have spent our careers in the technology space, a long-term perspective helps in recognizing cyclical shifts in the introduction of disruptive technologies. In the early days it was ARPANET (80s), which led to the World Wide Web (90s), then mobility (late 90s/early 2000s), then social media (2004) and now IoT, which in sheer numbers dwarfs anything that preceded it.

All of these technologies are incremental and dependent (e.g. mobility without the internet is just a phone, social media without a mobile device keeps us in our room, etc.). Every time a new capability is layered on, the potential for innovative solutions expands exponentially (look at the vast cottage industry of mobile apps that depend on mobility layered on top of internet infrastructure). As has happened several times over the last few decades, we’re experiencing another seismic shift, and this one has a fundamental difference.

IoT (as its name implies) is about things. There may be billions of people, but there are literally trillions of things. Why does this matter? Think about the socialization of things, or what is also referred to as the network effect. The more people there are on Facebook, the more useful and compelling it becomes (a bunch of kids chatting online from their dorm rooms vs. one billion people on the app at the same time is a completely different technology experience).
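To put rough numbers on that network effect: if value scales with the number of possible pairwise connections, n(n-1)/2, then moving from a billion connected people to a trillion connected things grows the connection space by roughly a factor of a million, as this quick back-of-the-envelope calculation shows.

```python
# Back-of-the-envelope: pairwise connections as a proxy for the network effect.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

people = 1_000_000_000         # ~1 billion users on a social app
things = 1_000_000_000_000     # ~1 trillion connected devices

print(f"people: {pairwise_connections(people):.3e}")   # ~5e17 possible connections
print(f"things: {pairwise_connections(things):.3e}")   # ~5e23 possible connections
print(f"ratio : {pairwise_connections(things) / pairwise_connections(people):.0e}")  # ~1e6
```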

Now instead of thinking of a billion people on an app at the same time (which is pretty cool), think of a trillion-plus devices communicating with each other in a common framework across the globe. What is needed to have something like this come to pass?

IoT has device dependencies. You can enable an IP address on a device, but how that device uses this capability depends on its original purpose; is it a fitness device on your wrist, or a sensor node on an oil pipeline? Both report data, but in a completely different process context. The breakdown looks like this:

Process – All business depends on process; whether you’re tracking patient data, refining oil, or building a car, information flows from one point to another, with each step dependent on the data in the preceding step. In many cases IoT is a series of connected things with a common end-point objective. By adding an accessible IP component to an already established process, the level of visibility (and hopefully insight) that can be gleaned will have a massive effect on companies’ ability to fine-tune their processes in real time.

Context – The context of data defines its use. While all data is potentially useful, some data is clearly more useful than other data, and that is defined by context. Who is the end user and how important is the data to helping them achieve strategic objectives? Operational data that improves production and profitability is a superset of data that, say, tells a line worker to adjust a valve. Same basic data, different context. Tying process to the end user is what defines context.

Tracking and reporting – The good news is we now have access to vast amounts of data. That’s also the bad news: the signal-to-noise ratio is about to go off the charts (in favor of noise). New analytics systems that are adept at pattern recognition on a fundamentally bigger scale are going to be critical to any long-term success in harnessing what IoT can deliver. The flip side of this is, what do you do with the data once you have it? Who is the end user, and what do they need to be able to accomplish? Data visualization has always been a compelling way to make the complex understandable. While technical people may intuitively understand complex data sets, the business folks who can write big checks are the ones who ultimately need to feel comfortable with what they see, and being able to track the right data and report on it in a way that makes sense is critical to ensuring the successful use of IoT.

Quality management – Data that does not factor in the quality associated with it is useless. IoT is not only extending the sources of data entering any system, it is also introducing ample opportunities for redundancy, overlap, errors, etc. Because of the compounding nature of the network effect, any inconsistencies that result from data quality issues are going to be amplified into a much bigger mess. Check your local news feed any day for a great example of this (Equifax, Target, Yahoo, the list is sadly endless). If companies can screw up at this level with their existing data sources, imagine what they can do when their sources are exponentially bigger.

Enabling infrastructure – Any disruptive technology tends to enter the market organically, since multiple vendors with multiple standards are vying to be the one ring to rule them all. The challenge with IoT is that the coverage model is so broad that it’s going to force systems that were not designed to work together from an information perspective to begin to collaborate. Want a simple example? BMC and ServiceNow (both massive IT infrastructure providers) have been at each other’s throats for years. Some companies are BMC shops, others like ServiceNow, and many actually use both, but for complementary functions. As the range of data sources delivered by IoT enters the IT ecosystem, it’s going to create a forcing function across the industry. The question is, can big companies, or the channels that enable them (which are also big companies), force everyone to just get along?

I have zero doubt this is going to work. The industry has consistently been capable of adopting wildly disruptive technology and eventually rolling it into something that works for the end user. It will require companies to adapt (to paraphrase Darwin, it’s not the strongest or smartest that survive, it’s the most adaptable). Some companies that rode the last wave will disappear, some will get gobbled up, and some will adapt and drive the system forward for years, creating a new crop of billionaires in the process.

Integrated Data is the Key for State Agencies to Become More Customer-centric

During the past few years, there has been considerable discussion and media attention about the impact of digital natives (Millennials and Generation Z) on business and culture – specifically the younger generations’ expectations of how technology enables human interactions. These shifts in expectations aren’t isolated to private-sector business; state and local government agencies are being similarly challenged to think differently about technology and how it is used to modernize the relationships between citizens and their governments.

For a long period of time, state agencies have been built on a foundation of bureaucracy, process and structure, imposing governmental culture and value systems on the citizens and organizations that interact with them. The impact of this is not only in the inherent inefficiencies that have been created, but also in the steadily increasing governmental costs associated with providing service. Fortunately, the environment is changing. Government agencies are increasingly looking to private industry as an example of modern customer-centric interactions and the internal capabilities needed to enable them.

While these are fundamentally people-and-process interactions, they cannot take place in the modern environment without technology and data. As a result, it is no surprise that state IT functions find themselves at the center of their organizations’ transformation efforts. Government CIOs, in turn, are adopting the operating models, strategies, processes and technologies of their business counterparts to address their organizations’ needs.

State IT organizations have been some of the strongest proponents of IT service management, enterprise architecture and data governance standards. While it may appear that these approaches perpetuate the bureaucratic mindset, in reality, they establish a framework where the lines between government/private industry can be blurred, and citizens can benefit from the strengths of government organizations in new and innovative ways.

The modern state IT organization is:

Transparent – Service-centric approaches to state IT are enabling agencies to leverage an ecosystem of public and private partners to support their organizational mission. Simultaneously, data and processes are integrating for transparency, while harvesting insights that support continuous improvement.

Responsive – With ITSM process improvements, state IT organizations are not only capable of being more responsive to the needs of citizens and staff, but also to changes in the technology, legislative and business environments. Structured operations and high-quality data make it easier to identify and address changes and issues.

Connected – Enterprise Architecture is enabling agencies to maintain a trustworthy core of information needed to support decision-making and integrate that core with cutting-edge technology capabilities, such as IoT, geospatial data, operational technology and mobile devices to enable connected insights on-par with private-industry initiatives.

State processes have always been data-centric – collecting, processing and analyzing information to support the agency’s charter. Recently, however, the interpretation of this charter has changed to include a stronger focus on the efficient use of resources and the effectiveness of the organization in making a positive impact on its served community. While standards provide a framework for transparency, responsiveness and connectivity, achieving success relies strongly on implementation. How IT systems are implemented, both internally to the organization and in conjunction with the broader ecosystem of public and private partner organizations, is critical for determining whether the organization’s charter can be effectively fulfilled in the context of modern interactions and under the present-day cost constraints.

Blazent is a recognized industry leader in providing IT data quality solutions – helping companies, governments and state agencies receive more value from their existing IT systems investments by making data more complete and trustworthy. As your agency looks to leverage emerging technology to improve operations, harness the experience of service-provider partners and engage with citizens in modern and innovative ways, Blazent has the solutions and experience working with state governments and their agencies to streamline operations and literally generate millions in savings.

How Data-Integrity Technology Can Help Create a Patient Health Timeline

For healthcare providers to deliver the best diagnosis and treatment for their patients, the data on which they rely must be of the highest quality, completeness and trustworthiness. It must also accurately reflect how the patient’s health has changed over time. One of the goals of health reform and digital medical records efforts during the past decade has been enabling the creation of unified medical records. This “patient health timeline” would be a complete digital chronology of the patient’s lifetime medical history (including symptoms, test results, diagnoses, provider notes and treatment activities) that providers can use when treating the patient.

An ambitious goal, the “patient health timeline” has been a difficult vision to realize due to the volume and fragmentation of patient health records – some of which have been digitized and some of which still exist only on paper. Fortunately, for patients younger than 20, the majority of their health data exists in a digital form that can eventually be integrated. A number of technical challenges currently prevent the realization of the “patient health timeline,” but data-quality management technology is rapidly helping companies to overcome some of them.

Fragmentation: Health records for a single patient are spread across the systems of a number of healthcare providers, insurance companies, pharmacies, hospitals and treatment centers. Each of these systems is unique, with no standard means of integrating patient data. Properly contextualizing data through an accurate set of relationships is key to establishing the integrity of integrated data from different sources.

Accuracy: There are portions of a patient’s health record that are relatively static throughout their lifetime (family medical history, allergies, chronic conditions and demographic data) and other portions that change with the patient’s health status and general aging (height/weight, reported symptoms, diagnoses and treatments, mental state, etc.). For the static portions (e.g., profile information), provider records often contain conflicting data, which must be reconciled and validated for accuracy. For the portions of the health timeline that change over time, identifying accurate cause-and-effect relationships among data items is key to creating the actionable insights that providers need. Data validation and reconciliation technology can help companies both resolve data conflicts and identify relationships within data.
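As a simplified, hypothetical illustration of that reconciliation step for the static portions of a record, the logic might resemble the following sketch, which merges conflicting profile fields from multiple providers and flags disagreements for validation.

```python
# Simplified sketch: reconciling conflicting static patient-profile fields
# from multiple provider systems (all records hypothetical).
from collections import Counter

records = [
    {"source": "clinic_a",   "blood_type": "O+", "allergies": "penicillin"},
    {"source": "hospital_b", "blood_type": "O+", "allergies": "none recorded"},
    {"source": "pharmacy_c", "blood_type": "A+", "allergies": "penicillin"},
]

def reconcile(field: str) -> dict:
    """Pick the majority value for a field and flag any conflict for human review."""
    values = Counter(r[field] for r in records)
    best, count = values.most_common(1)[0]
    return {
        "field": field,
        "value": best,
        "conflict": len(values) > 1,          # needs validation if sources disagree
        "agreement": f"{count}/{len(records)}",
    }

for field in ("blood_type", "allergies"):
    print(reconcile(field))
```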

Patient Privacy: Regulations require patients to grant specific authorization for the use and sharing of personal health records. Compiling the patient health timeline would require the patient to grant authorization for the data to be integrated and for the use of the timeline data after it is compiled, while allowing them to revoke authorization for specific data points or sets in the future. The patient health timeline, therefore, must be assembled in a structured and managed way that would enable disassembly in the future. Data quality management technology can help enable this transformation, so patients and providers know that the health data is both trustworthy and private.

Blazent is the leader in Data Quality, helping organizations drive downstream value by validating and transforming data into actionable intelligence. Nowhere is actionable intelligence needed more than in improving the quality of peoples’ health and wellness. How and when the patient health timeline will become a reality is still to be determined, but it is clear that data-integrity technology will be a critical component.

Your current discovery tool is not giving you what you need

Here is what you need to know

Your current discovery tool is not up to the task – period. Discovery tools are very good at performing the task for which they were designed – discovery. They are designed to look into a defined environment to identify, inventory and classify known types of objects and their direct connections and dependencies. What they don’t do is combine the data about the discovered objects with all of the other data about your environment to give you the big-picture perspective needed to enable effective decision making. This is a core requirement for nearly every business with which we’ve recently spoken, and the existing discovery tools in use are consistently insufficient for the task.

Your Discovery Tool is part of the problem

Discovery tools are contributing to your big-data problem by making data collection easier and faster, but failing to help you interpret that data and convert it into actionable insights. Here are 5 ways your current discovery tools are giving you what you think you need, while remaining incapable of fulfilling your true needs.

  1. Discovery tools can indicate what is present, but not how it is used. A full appreciation of your IT environment requires an understanding of both its content (the objects within the environment) and context (the activities taking place). Discovery tools do a good job of capturing the content, within specific parameters (that is, discovery tools are often optimized to discover specific classes of CIs). Contextualization, however, often involves correlation of multiple discovery systems and integration with, for example, known business processes, job functions, etc. This operational context is critical to convert data into actionable insights.
  2. Discovery tools can’t indicate what was intended, only what is operating. Most IT environments were not created based upon a “grand design,” but are the result of an evolutionary process over a number of years. Discovery tools can provide valuable insight into the objects that are operating in the environment today, but they are incapable of capturing the objects that may have been present in the past and the impact their legacy is having on the present. Fragmented implementations, historical technology limitations and design decisions are all likely responsible for why your environment is the way it is today. Historical perspective and intent cannot be captured by looking only at the present environment, and yet they are critical as a basis for informed decisions about the future.
  3. Discovery tools can’t indicate what is missing, only what is present. Because discovery tools provide a single point of view of the operating environment, they can’t capture objects that are expected to be present in the environment, but for whatever reason are no longer there. Absence of data creates a gap that discovery tools are incapable of processing, which means the completeness of data is at constant risk. Reconciling overlapping data sets from multiple sources/points of view is required to answer the completeness question (see the sketch after this list).
  4. A single discovery tool won’t capture your entire environment; you will need many. Discovery tools are designed to look for a discrete set of objects with known characteristics, which means discovery tools are adept at finding and inventorying those specific classes of objects. With the diversity of modern IT ecosystems, it is practically impossible for a single discovery tool to understand and process all of the object types that are present. Some discovery tools are very good at capturing physical objects, while others capture software, etc. Discovery tools should be treated as specialists and the discovery process as a team sport – leveraging the capabilities of multiple players.
  5. Discovery tools don’t react to unknown objects very well. They are great for automating the discovery of things that are well known and easily identifiable, but nuanced variation among objects and the capture of new object types may require manual activities and/or additional tooling. At their core, discovery tools are rules-based systems. As the definitions of the rules improve, discovery tools will have the capability to capture and classify a greater percentage of the objects in the environment, but there will always be exceptions. These exceptions are often the most meaningful for providing environmental insights.
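Below is a minimal, hypothetical sketch of the reconciliation idea from point 3: comparing overlapping inventories from two discovery tools against the expected records to surface what each view is missing. The CI names and sources are invented for illustration.

```python
# Minimal sketch: reconciling overlapping discovery inventories (hypothetical CI names).

expected   = {"web01", "web02", "db01", "db02", "lb01"}   # e.g. from the CMDB / asset register
tool_scan  = {"web01", "web02", "db01", "app07"}          # discovery tool A
agent_scan = {"web02", "db01", "db02", "app07"}           # discovery tool B

discovered = tool_scan | agent_scan

report = {
    "missing_entirely": expected - discovered,    # expected but seen by no tool -> gap
    "unexpected":       discovered - expected,    # seen but not on record -> investigate
    "single_source":    tool_scan ^ agent_scan,   # seen by only one tool -> lower confidence
}
for label, cis in report.items():
    print(f"{label:18} {sorted(cis)}")
```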

An important part of modern IT management

Discovery tools are important in the modern management of IT environments; however, independently they are clearly incapable of satisfying the overall need of most organizations. To gain the most value from discovery tool investments, companies must look at how they use multiple discovery tools together to provide a broad and holistic perspective on the environment.

As data from discovery tools is combined with known data from existing sources, integration, reconciliation of conflicts, gap resolution and correct contextualization are critical. Blazent is an industry leader in providing the solutions needed to gather and focus your discovery data and resolve the quality issues that are keeping you from achieving your goals of information insight and data-driven decision making.

Five key barriers to IT/OT integration and how to overcome them

Operational Technology (OT) consists of hardware and software that are designed to detect or cause changes in physical processes through direct monitoring and control of devices. As companies increasingly embrace OT, they face a dilemma as to whether to keep these new systems independent or integrate them with their existing IT systems. As IT leaders evaluate the alternatives, there are 5 key barriers to IT/OT integration to consider.

Business Process Knowledge – OT is an integrated part of the physical process itself and requires subject matter experts with both business process knowledge as well as technical skills related to the OT devices being used. IT staff members are often strong technologists, but lack the business and physical process expertise needed to support OT. Increasingly, companies are overcoming this challenge either through training manufacturing and operations staff on the technical skills or leveraging the use of a specialized partner to support the OT implementations.

Manageability & Support – OT systems are often distributed across geographic locations separate from the IT staff and are costly to connect to centralized management resources. This makes monitoring, control and the management of incidents and problems more difficult and costly. Remote management capabilities, advanced diagnostics, designed redundancies and self-healing capabilities built into the OT devices help overcome these challenges, at a price.

Dependency Risk – Two of the key challenges of enterprise IT environments are managing the complex web of dependencies and managing the risk of service impact when a dependent component fails or is unavailable. With traditional IT, the impact typically falls on some human activity, and the user is able to mitigate it through some type of manual workaround. For OT, companies must be very careful managing the dependencies on IT components to avoid the risk of impacting physical processes when and where humans are not available to intervene and mitigate the situation. Since mitigation may not be an option with OT, controlling the dependencies is often the best approach to avoiding impact.

Management of OT Data – The data produced by OT devices can be large, diverse in content, time sensitive for consumption and geographically distributed (sometimes not even connected to the corporate network). In comparison, most IT systems have some tolerance for time delays, are relatively constrained in size and content, and are reliably connected to company networks, making them accessible to the IT staff for data management and support. With OT systems, a company will need to decide whether integration of the data into the overall enterprise data picture is necessary or whether the data created by the OT system can be left self-contained locally.

Security – IT systems are a common target for malicious behavior by those wishing to harm the company. The integration of OT systems with IT creates additional vulnerability targets with the potential of impacting not just people and data, but also physical processes. Segmentation of IT and OT systems to prevent cross-over security vulnerabilities as well as targeted security measures for the OT systems can help companies mitigate this risk.

Operational Technology has an important role in manufacturing and operations automation and in enabling the digital enterprise. As IT leaders and their organizations learn to embrace this technology as part of the overall IT landscape, there are key decisions to be made about where to integrate with existing systems and how to do it in a way that mitigates cost and risk. The impact of OT on IT should not be underestimated.

Answers to 8 essential questions about assets that should be in your CMDB

Your Configuration Management Database (CMDB) is continuously increasing in size and complexity, driven by an endless list of components that need to be improved or new data sets that someone wants to add. You understand that more data doesn’t necessarily translate into more value. You wish someone could answer the question, “What data do I actually need in my CMDB?” We can answer that question, and do it pretty precisely. At the core of any CMDB are the Asset/Configuration Item (CI) Records. Here are the answers to 8 essential questions about assets that are important for managing the IT ecosystem, and that should be in your CMDB.

1. What are they? An accurate inventory of what assets and configuration items exist in your IT ecosystem is the foundation of your CMDB. Your asset/CI Records may come from discovery tools, physical inventories, supplier reports, change records, or even spreadsheets, but whatever their origin, you must know what assets you have in your environment.
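As an illustration only (the field names are assumptions, not a standard schema), a core asset/CI record that preserves where each answer came from might look something like this.

```python
# Hypothetical sketch of a core asset/CI record with source provenance.
from dataclasses import dataclass, field

@dataclass
class CIRecord:
    ci_id: str                      # unique identifier in the CMDB
    name: str
    ci_class: str                   # e.g. "server", "application", "network_device"
    sources: list[str] = field(default_factory=list)   # where we learned about it
    last_seen: str | None = None    # ISO date of the most recent confirming source

inventory = [
    CIRecord("CI-0001", "web01", "server", sources=["discovery_tool", "purchase_order"]),
    CIRecord("CI-0002", "crm-app", "application", sources=["spreadsheet"]),
]

# A record confirmed by only one source is a candidate for verification.
for ci in inventory:
    print(ci.ci_id, ci.name, "confirmed by", len(ci.sources), "source(s)")
```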

2. Where are they? Asset location may not seem relevant at first, but the physical location of hardware, software and infrastructure impacts what types of SLAs you can provide to users, the cost of service contracts with suppliers and, in some areas, the amount of taxes you are required to pay. In many organizations, the physical location of assets is only captured as the “ship-to address” on the original purchase order; however, good practice dictates that you should update this information frequently. Options include GPS/RFID tracking, change records, physical inventory or triangulation from known fixed points on a network.

3. Why do we have them? Understanding the purpose of an asset is the key to unlocking the value it provides to the organization. Keep in mind that an asset’s purpose may change over time as the business evolves. The intended purpose when the asset was purchased may not be the same as the actual purpose it is serving today. Periodic review of dependencies, requests for change and usage/activity logs can help provide some insight into an asset’s purpose.

4. To what are they connected? Dependency information is critical for impact assessment, portfolio management, incident diagnosis and coordination of changes. Often, however, asset dependency data is incomplete, inaccurate and obsolete – providing only a partial picture to those who use the data for decision-making. When capturing and managing dependency data, it is important to keep in mind that the business/IT ecosystem is constantly evolving (particularly with the proliferation of cloud services), causing dependencies to assume important time attributes.
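As a hypothetical sketch of what those time attributes could look like in practice, dependency records can carry validity windows so that impact analysis only counts relationships that are current.

```python
# Hypothetical sketch: dependency records with time attributes, so impact analysis
# only counts relationships that are valid "right now".
from dataclasses import dataclass
from datetime import date

@dataclass
class Dependency:
    upstream: str                   # the CI being depended on
    downstream: str                 # the CI that depends on it
    valid_from: date
    valid_to: date | None = None    # None = still current

dependencies = [
    Dependency("db01", "crm-app", date(2019, 3, 1)),
    Dependency("legacy-db", "crm-app", date(2015, 1, 1), valid_to=date(2019, 2, 28)),
]

def current_dependencies(ci: str, today: date) -> list[str]:
    return [d.upstream for d in dependencies
            if d.downstream == ci
            and d.valid_from <= today
            and (d.valid_to is None or today <= d.valid_to)]

print(current_dependencies("crm-app", date(2024, 6, 1)))  # ['db01']
```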

5. Who uses them? User activities and business processes should both be represented in the CMDB as CIs (they are part of your business/IT ecosystem). If not, then you are missing a tremendous opportunity to leverage the power of your ITSM system to understand how technology enables business performance. If you already have users, activities and processes in your CMDB, then the dependency relationships should frequently be updated from system transaction and access logs to show actual (not just intended) usage.

6. How much are they costing? Assets incur both direct and indirect costs for your organization. Examples include support contracts, licensing, infrastructure capacity, maintenance and upgrades, service desk costs, taxes and management/overhead by IT staff. How much each asset costs may not be easy to calculate, but it becomes the component cost for determining the total cost of providing services to users.

7. How old are they? Nothing is intended to be in your environment forever. Understanding the age and the expected useful life of each of your assets helps you understand past and future costs (TCO) and informs decisions about when to upgrade versus when to replace an asset. Asset age information should include not only when the asset was acquired, but also when significant upgrades/replacements occurred that might extend the asset’s expected useful life to the organization.

8. How often are they changing? Change requests, feature backlogs and change management records provide valuable insights into the fitness of the asset for use (both intended and incidental). This information should be available from other parts of your ITSM system (change, problem management, portfolio management, etc.), but it is critical that current and accurate information about change be considered part of your asset records.

You should be able to find the answers to these 8 essential questions about assets in your CMDB. If you can’t, then you may have problems with either your data integration or asset data quality. If that is your situation, then Blazent can help. As industry leaders in data quality management solutions, Blazent can help you gather data from a broad set of sources and, through integration, correlation, reconciliation and contextualization, improve the quality of the core asset records in your CMDB, so you can maximize the decision-making value from your ITSM investments.

The future is closer than you think. Data is coming (and fast), how will you manage it?

What will you do when your job and the future of your company hinge on your ability to analyze almost every piece of data your company ever created against everything known about your markets, competitors and customers – and the impact of your decision will determine success or failure? That future is closer than you think. Data on an entirely different level is coming, and much faster than anyone realizes. Are you prepared for this new paradigm?

  • Technologists have been talking about “big data” as a trend for more than a decade, insisting that it is coming “soon.” “Soon” is now in your rear-view mirror.
  • Companies have been capturing and storing operational and business process data for more than 20 years (sometimes longer), providing a deep vault of historical data, assuming you can access it.
  • IoT is leading to the creation of a massive stream of new operational data at an unprecedented rate. If you think volumes are high now, you’ve seen nothing yet.
  • The free flow of user-generated (un-curated) information across social media has enabled greater contextual insights than ever before, but concurrently the signal-to-noise ratio is off the charts.

What does all this mean? It means big data is already driving everything we do. The analytics capabilities of IT systems are becoming more sophisticated and easier for business leaders to use to analyze and tune their businesses. For them to be successful and make good decisions, however, the data on which they rely must be trustworthy, complete, accurate and inclusive of all available data sets.

Delivering the underlying quality data that leaders need is no small feat for the IT department. The problem has transformed from “not enough data” to “too much of a good thing.” The challenge facing most organizations is filtering through the noise in the data and amplifying the signal of information that is relevant and actionable for decision-making.

Inside your organization, historical data may be available in data warehouses and archives, but is likely fragmented and incomplete, depending on the source systems in use and the business processes being executed when the data was created. As IT systems have matured and more business and manufacturing processes are automated, the amount of operational (transactional) data created has increased. As part of stitching together business processes and managing changes across IT systems, data is often copied (and sometimes translated) multiple times, leading to large-scale data redundancy. IoT and OT are enabling instrumentation and the collection of an unprecedented volume and diversity of new operational data (often available in real-time) for operational monitoring and control, but the data from these collectors may have a limited useful life.

Outside your organization lies a wealth of contextual data about your customers, competitors and markets. In the past, experts and journalists curated published information for accuracy and objectivity – providing a baseline expectation of data quality. In the age of social media, a large volume of subjective user-generated content has replaced this curated information. Although this new content lacks the objective rigor of its curated predecessors, what is lost in quality is offset by an exponential increase in the quantity and diversity of data available for consumption. The challenge for business leaders is filtering through the noise of opinion and conjecture to identify the real trends and insights that exist within publicly available data sets.

For you to make the big decisions on which the future of your company depends, you must be able to gather all of the available data – past, present, internal and external – and refine it into a trustworthy data set that can yield actionable insights while being continuously updated.

Machine Learning is re-inventing Business Process Optimization

Machine Learning is a game changer for business process optimization – enabling organizations to achieve levels of cost and quality efficiency never previously imagined. For the past 30 years, business process optimization has been a tedious, time-consuming manual effort. Those tasked with it had to examine process output quality and review a very limited set of operational data to identify optimization opportunities based on historical process performance. Process changes would require re-measurement and comparison to pre-change data to evaluate the effectiveness of the change. Often, improvement impacts were either unmeasurable or failed to satisfy the expectations of management.

With modern machine-learning capabilities, process management professionals are able to integrate a broad array of sensors and monitoring mechanisms to capture large volumes of operational data from their business processes. This data can be ingested, correlated and analyzed in real-time to provide a comprehensive view of process performance. Before machine learning, managing the signals from instrumented processes was limited to either pre-defined scenarios or the review of past performance. These limitations have now been removed.

Machine learning enables the instrumentation of a greater number of activities because of its capability to manage large volumes of data. In the past, process managers had to limit what monitors they set up to avoid information overload when processing the data being collected. Cloud-scale services combined with machine learning provide greater flexibility for process managers. They are able to collect data for “what-if” scenario modeling, as well as to train the machine-learning system to identify relationships and events within the operational processes much more quickly than users could identify them manually.
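As one very simplified illustration of this kind of capability (using scikit-learn’s IsolationForest as a stand-in for a full machine-learning platform; the process data, features and thresholds are synthetic assumptions), a model trained on normal process runs can flag unusual runs for review.

```python
# Very simplified sketch: unsupervised anomaly detection on process metrics.
# Data is synthetic; features and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per process run: [cycle_time_sec, defect_rate, energy_kwh]
normal_runs = rng.normal(loc=[120, 0.02, 5.0], scale=[5, 0.005, 0.3], size=(500, 3))
new_runs    = np.array([[160, 0.09, 7.5],    # unusual run
                        [118, 0.021, 5.1]])  # looks like business as usual

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_runs)
flags = model.predict(new_runs)              # -1 = anomaly, 1 = normal

for run, flag in zip(new_runs, flags):
    status = "anomaly -> raise event for review" if flag == -1 else "normal"
    print(run, status)
```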

One of the most promising potential benefits of machine learning is the “learning” aspect. Systems are not constrained to pre-defined rules and relationships – enabling them to adapt dynamically to changes in the data set from the business process and make inferences about problems in the process. These inferences can then be translated into events and incidents – potentially leading to automated corrective action and/or performance optimization of the process.

Even if companies are not ready to fully embrace machine-learning systems making decisions and taking actions without human intervention, there is tremendous near-term value in using machine-learning capabilities for correlation analysis and data validation to increase confidence and quality of data being used to drive operational insights. Manual scrubbing of data can be very costly and, in many cases, can offset (and negate) the potential benefits that data insights can provide to business process optimization. Machine learning is enabling higher quality insights to be obtained at a much lower cost than was previously achievable.

In business process optimization, there is an important distinction to be made between “change” and “improvement.” Machine-learning systems can correlate a large diversity of data sources – even without pre-defined relationships. They provide the ability to qualify operational (process) data with contextual (cost/value) data to help process managers quantify the impacts of inefficiencies and the potential benefits of changes. This is particularly important when developing a business justification for process optimization investments.

Machine learning is a true game changer for process optimization and for process management professionals. Process analysis can now involve an exponentially larger volume of data inputs, process the data faster and at a much lower price point, and generate near-real-time insights with quantifiable impact assessments. This enables businesses to achieve higher levels of process optimization and to be more agile in making changes when they are needed.