Mapping the Data Lifecycle

In business, we rely on applications, services, and data to improve the efficiency and effectiveness of the organization. One side effect of having such sophisticated capabilities is believing we can now analyze all the data that enters our IT systems; the problem that consistently surfaces is losing track of key aspects of the data and what it reveals about our organization. Real insight comes from having a complete view of the data from its creation to its retirement. This data lifecycle should be our focus, since it helps us understand the data's actual value and impact on operations, as well as how it can be used to ensure the organization can grow and prosper.

The Data Lifecycle

There are three main areas to understand with respect to the lifecycle of data once it is created:

  • Maintenance
  • Entitlement
  • Retirement

Each area is important to the support of operational activities, which ultimately contribute to positive business outcomes. Understanding the entire data lifecycle can provide valuable insight into how to run the organization better.

Maintenance

Maintenance of data is a vast area that demands many resources. It involves understanding what should be remediated versus what could be eliminated.

The key to any data maintenance plan is a data quality improvement strategy. This is required across the entire organization in order to leverage all available data source types. The more frequently these maintenance activities are performed, the more likely it is that organizations will be able to leverage data for beneficial purposes.

Entitlement

Entitlement is a different type of effort. It focuses on making sure the right people have access to the right data.

Only appropriate, authorized people will fully understand the meaning of the data they view or consume, since these individuals have innate knowledge of its context and can interpret what it means. For example, someone with a strong security background might detect data modification patterns that suggest a possible breach, or a systemic issue in how the data is being maintained. Data at the discrete or elemental level might not convey this message unless individuals with knowledge of the entire lifecycle can understand and translate it so others can take appropriate action.

Retirement

Lastly is the retirement of data, which in some cases is the most vital step.

When we recognize that data or a data source is no longer adding value to the organization, it must be disposed of or eliminated from further consideration so it does not negatively skew or influence decisions. In addition, timely retirement provides valuable information about the data itself, as well as the usefulness and longevity of the associated data sources. It may also explain why decisions based on that data did not match expectations.

As everyone in IT knows, there is no shortage of data or of tools to interpret it, but a systematic process for managing the data lifecycle is critical if that data is to be fully leveraged. Without such tools and processes, data just becomes more noise in our environment and prevents us from reaching successful outcomes. Outcomes will only begin to change when we look at data in a more comprehensive and holistic manner that factors in the entire lifecycle; only then will we have all the information needed to help the organization make the best decisions possible.

Blazent supports a holistic view of the data lifecycle by automating the application and maintenance of data quality through validation, normalization and mapping of data relationships.

Work in Process and Asset Provisioning

Let’s look upstream of your Configuration Management Database (CMDB)/Asset repository. How does the data enter the system? Think it’s a streamlined process? Think again.

Asset provisioning has always been a process IT professionals love to hate. Stories are frequently heard of six- to nine-month lead times for obtaining something as pedestrian as servers. How is an organization expected to compete in the digital world with those kinds of timelines?

For some years, large IT organizations have been implementing “service catalogs” that formalize request management, such as requests for new IT assets or resources. When asset provisioning is handled as a requestable process, fulfilling the request results in a new CMDB record. Formalized service request management has helped consolidate this type of work and clarify the overall demand for IT services, but complaints about IT slowness persist.

Implementing a service request process, however, addresses only part of the problem. Many people do not realize that any formalized process like Asset Provisioning represents a queue.


There are certain ways in which queues behave. For example, if there are 10 people in line, and each person takes 10 minutes to serve, you will have to wait 100 minutes if you are the 11th person to join the line.

This may seem obvious, but things get more complicated (see the sketch after this list) when:

  • People are arriving in the line at different times
  • People’s requests take different amounts of time
  • People’s level of urgency and priority can be different (e.g. a different line for those with platinum airline status)
  • You can “open new windows” for service
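
To see why these complications matter, here is a minimal, purely illustrative sketch of a single service “window” handled first-come, first-served (the arrival and service times are invented numbers, not data from any real process):

    def simulate_fifo(arrivals, service_times):
        """Return each request's wait time in a first-come, first-served queue."""
        waits = []
        server_free_at = 0                        # when the single "window" next becomes free
        for arrived, service in zip(arrivals, service_times):
            start = max(arrived, server_free_at)  # wait if the window is still busy
            waits.append(start - arrived)
            server_free_at = start + service
        return waits

    # The simple case from the text: 11 people in line, everyone needs 10 minutes.
    waits = simulate_fifo(arrivals=[0] * 11, service_times=[10] * 11)
    print(f"11th person waits {waits[-1]} minutes")                # 100 minutes

    # Uneven arrivals and service times change the picture considerably.
    arrivals = [0, 2, 4, 5, 9, 12, 15, 15, 20, 26, 30]
    service_times = [10, 3, 25, 5, 8, 30, 4, 12, 6, 20, 7]
    waits = simulate_fifo(arrivals, service_times)
    print(f"average wait: {sum(waits) / len(waits):.1f} min, worst: {max(waits)} min")

Extending this toy model with priority lanes or extra windows is straightforward, which is exactly why it pays to reason about request processes explicitly as queues.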

Lean Product Development specialist Don Reinertsen argues that queues are an often-overlooked root cause of delays in product delivery; in this case, the “product” is an IT service. Probably the biggest issue seen in IT service request processes is queue overload.

This is discussed in the well-known DevOps novel, The Phoenix Project. Understanding and managing work in process is a theme throughout this highly recommended book. Its protagonists come to realize that if work must pass through a series of highly utilized queues, the amount of time spent waiting in queue explodes.
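
The book makes this concrete with a simple rule of thumb: wait time scales with the ratio of how busy a resource is to how idle it is. A short, illustrative sketch (the utilization figures are arbitrary examples):

    # Wait time rises non-linearly as a resource approaches 100% utilization.
    # This uses the busy/idle ratio discussed in The Phoenix Project; treat the
    # output as a qualitative illustration, not a prediction for any real queue.

    def relative_wait(utilization: float) -> float:
        """Wait time expressed as busy time divided by idle time."""
        return utilization / (1.0 - utilization)

    for pct in (50, 80, 90, 95, 99):
        u = pct / 100
        print(f"{pct}% utilized -> relative wait {relative_wait(u):.0f}x")
    # 50% -> 1x, 80% -> 4x, 90% -> 9x, 95% -> 19x, 99% -> 99x:
    # the last few points of utilization do most of the damage.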

Much of the delay seen in Asset Provisioning processes can be traced to overloaded queues; too many approvals, too few people working them. The work waits far more than it is acted upon. Customer satisfaction declines severely, and people start looking for alternatives. If the Asset team is being bypassed, your CMDB or Asset repository will suffer the consequences.

Do you know how your Asset process is functioning, in terms of being one more queue within your organization’s overall IT delivery operating model? Are you managing for “efficiency,” at the cost of imposing weeks or months-long delays on your customers? Enterprise IT organizations have long had a reputation for being slow. Sometimes this is blamed on bad culture, poor leadership, or even poor process.

But even the best Asset process will bog down if it is not understood as a queue; in the enterprise, it is actually one of a system of queues that make up the digital operating model. Providing capacity for the digital product team is a critical task, and understanding queuing theory will help the Asset team support an organization’s value journey. Understanding the process in this way will also help to deploy automation in a valuable and intelligent manner. Ultimately, this will result in a well-regarded Asset process, and improved process quality will lead to improved CMDB and Asset data quality. This is just one aspect of having accurate, quality data within the CMDB. The value of other downstream operational benefits can be found in the Blazent whitepaper on the top 5 costs associated with poor data quality.

Next: What happens when queues are unmanaged? Your Cost of Delay goes up. What’s Cost of Delay? Stay tuned.

For an introductory discussion of these issues in the context of a great novel, read The Phoenix Project by Gene Kim (see especially chapter 23). For a deeper dive, see Don Reinertsen’s Principles of Product Development Flow.

Data Source Consolidation

Four Core Data Source Consolidation Considerations 

Every organization depends on a range of data sources to support different operational initiatives, and as we all know, these data sources are continuously being added to, expanded, changed, and merged. The dynamics of managing and integrating data sources are complex and highly variable, and often result in new sources that are similar to, or duplicate parts of, existing sources.

The challenge is that these disparate data sources must work together and share information. In most instances, the sheer number of sources and the volume of data they contain make this impossible without significant added complexity and effort. The continuous expansion of data sources is a key driver for consolidation initiatives, and sources that can be configured to work together can improve organizational efficiency and reduce capital expenditures.

Document all Data Sources in Scope

The first step in any consolidation initiative is to document and assess all the data sources within its scope. It is important to understand not only what each source contains, but also how various teams use it. For example, some teams might consume the data source as-is, while others may aggregate and normalize it with other data prior to use. Documentation of the data sources must also include any integrations, which might need to be eliminated or re-established with a different source once consolidation is complete.
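
One lightweight way to capture this documentation is a simple, structured inventory record per source. The sketch below is only illustrative; the field names and example values are hypothetical, not a prescribed schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataSourceRecord:
        """One entry in the consolidation-scope inventory (illustrative fields only)."""
        name: str
        owner: str
        contents: str                                           # what the source contains
        consumers: List[str] = field(default_factory=list)      # which teams use it, and how
        integrations: List[str] = field(default_factory=list)   # feeds to re-point or retire
        used_as_is: bool = True                                 # False if aggregated/normalized first

    inventory = [
        DataSourceRecord(
            name="asset_db_emea",
            owner="Infrastructure Ops",
            contents="Server and network asset records for EMEA",
            consumers=["Capacity planning (as-is)", "Finance (normalized monthly extract)"],
            integrations=["nightly feed to CMDB", "license reconciliation export"],
            used_as_is=False,
        ),
    ]
    for src in inventory:
        print(f"{src.name}: {len(src.integrations)} integration(s) to review")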

Evaluate Their Use

When all the data sources are identified and documented, it is important to evaluate each for its intended use. This means assessing the data sources for expiring licenses, retiring technologies, or strategic plans to move away from those tools and their respective sources of data. This type of evaluation minimizes disruption to users from repeated changes to the sources they rely on daily. It also offers potential opportunities to work jointly with already-budgeted initiatives to help with the consolidation.

Planning and Scheduling the Consolidation

The next stage is planning and scheduling, which will determine the sequence in which the data sources are consolidated. Integrations and their interdependencies can make that sequence very complicated, and it may be necessary to adopt a temporary order to keep operations flowing smoothly. If this is done, it is important not to invest too much in temporary measures, since the data must eventually be moved again to its permanent source.

Analysis and Comparison of Data Models

The last stage of preparation before a data consolidation project begins is the analysis and comparison of data models and structures. The data sources will not have identical models, and defining the mappings between them will require considerable effort. Before beginning, the costs, additional time, and increased complexity of data manipulation caused by structural differences must be considered. It is also important to factor in the long-term implications of maintaining these manipulations if there are no plans to automate them. The cost of automation technologies may seem high until you compare it to the cost of doing the same thing manually.
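
In practice, the comparison usually produces a field-level mapping plus per-field transforms, which is also what any automation will need as input. A hedged sketch, with invented source and target field names:

    # A minimal field-mapping sketch between two hypothetical data models.
    # Real mappings are far larger and are worth generating and maintaining with tooling.

    FIELD_MAP = {
        # source field -> (target field, transform)
        "HostName":     ("hostname",      str.lower),
        "SerialNo":     ("serial_number", str.strip),
        "PurchaseDate": ("purchased_on",  lambda v: v.replace("/", "-")),
    }

    def map_record(source_record: dict) -> dict:
        """Translate one record from the source model into the target model."""
        target = {}
        for src_field, (dst_field, transform) in FIELD_MAP.items():
            if src_field in source_record:
                target[dst_field] = transform(source_record[src_field])
        return target

    print(map_record({"HostName": "APP01.CORP", "SerialNo": " SN-443 ", "PurchaseDate": "2021/03/14"}))
    # {'hostname': 'app01.corp', 'serial_number': 'SN-443', 'purchased_on': '2021-03-14'}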

Stagger the Data Source Consolidation

Once the analysis and planning are complete, the actual consolidation of the data sources can begin. It’s important to be prepared for the consolidation to occur in a staggered manner, since there are always delays due to some data sources not being ready for moves or changes. There will also be some data sources whose consolidation plans must be expedited because of external dependencies. The consolidation plan must be flexible, but disciplined, since it is the nature of this type of effort to be complex, interdependent and subject to unexpected changes.

In Summary

Consolidation efforts are a continuous requirement because of the massive organizational and operational burdens created by the constant expansion of information sources and the data within them. The cost of simply keeping pace with this growth is unsustainable; efforts to consolidate data sources and simplify operations are not easy, but they can reap significant savings and increase operational efficiency. In the short term, this kind of initiative may appear too complex to execute; however, with proper assessment, planning, and use of available technologies and services, the return far outweighs the challenge of maintaining the status quo.

Blazent’s Data Quality solutions provide organizations with the flexibility to retain all existing data sources and still benefit from consolidation by intelligently merging multiple sources. The process Blazent creates delivers consolidated records that are complete, accurate and current.

Your friendly neighborhood data person

As digital transformation becomes mainstream and drives organizational growth, the enterprise information pipeline continues to expand, propelled by a relentless acceleration in the scale and complexity of the underlying data.

This is less of an issue for organizations focused on a single platform or product.  However, for companies with multiple product value streams, platforms, and lines of business, achieving any sort of consistent view of their data ecosystem can be difficult at best.

This is not a new problem. At scale, data has to be managed like any other corporate resource, such as people, money, or brand. Data Management has existed for years as a professional practice, with DAMA International (the Data Management Association) as the primary industry organization; it publishes a set of best practices known as the Data Management Body of Knowledge (DMBOK).

What exactly is meant by Data Management? Is it simply focused on the day-to-day activities of database administrators? Any Data Manager would answer with a resounding no; Data Management actually operates at a higher, “logical” level, and is concerned with such topics as:

  • Enterprise glossaries and ontologies, that is, describing the meaning of things
  • Defining the highest level structure and relationships of enterprise concepts
  • Describing which systems store which data (designating systems of record) and which systems provide data feeds
  • Implementing Enterprise Master Data Management, which includes defining key code sets, abbreviations and numbering schemes
  • Ensuring that data quality is measured and maintained
  • Managing the security aspects of data, such as designating what data is restricted, confidential, etc.

Enterprise Data Management is also rapidly converging with Records Management, the traditional practice of managing official paper-based documentation, which is itself rapidly transforming into e-records management. Since these records are stored as data, the associated databases must now comply with records management policies, such as identifying and applying retention schedules. This is a mandatory, legally driven requirement affecting broad swaths of the IT industry.

What does all this mean for today’s asset and configuration managers? Data Managers (and their close colleagues, the data architects) are invaluable in large-scale environments, or when the enterprise is faced with complex data-centric problems. For example, when an organization considers integrating data from two disparate systems, a data manager may know that the data is already integrated in an existing operational data store.

When designing a system to capture a certain data topic, data managers typically know what existing system is already designated as the system of record for that topic. And if an enterprise has a continuing data quality problem, a good data manager or architect can be consulted on areas such as exception reporting and auditable controls, which help to continually improve data.
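
Exception reporting, at its simplest, means applying explicit quality rules to records and routing the failures for follow-up. The sketch below is illustrative only; the rules, field names, and records are hypothetical:

    # Apply simple data quality rules and collect the records that fail them.

    RULES = {
        "missing owner":        lambda r: not r.get("owner"),
        "serial not populated": lambda r: not r.get("serial_number"),
        "stale last_seen":      lambda r: r.get("last_seen", "") < "2024-01-01",
    }

    def exception_report(records):
        """Return one entry per record that violates at least one rule."""
        failures = []
        for record in records:
            broken = [name for name, rule in RULES.items() if rule(record)]
            if broken:
                failures.append({"id": record.get("id"), "violations": broken})
        return failures

    records = [
        {"id": "srv-001", "owner": "web team", "serial_number": "SN-1", "last_seen": "2024-06-02"},
        {"id": "srv-002", "owner": "",         "serial_number": "",     "last_seen": "2023-11-20"},
    ]
    for row in exception_report(records):
        print(row)   # only srv-002 appears, with all three violations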

Is it possible the problem has become too complex? With multiple sources to integrate, it’s easy to see ripe opportunities for data overlap or redundancy. A capable data analyst should have the skills needed to solve such problems; he or she should know how to profile data sets, reconcile them and identify their strengths and weaknesses. A data analyst should also know how a given set of sources can be combined to become the required solution.

A data professional can certainly accelerate the implementation of a Data Quality Management solution; experienced data people should know the location of the source of truth for mission-critical data and how the integration and contextualization of that data can be leveraged to drive downstream value. They can then use this information to define enterprise data requirements and an overall data flow architecture.

Given the complexity of these issues, where do you find your friendly neighborhood data person? Often, he or she is associated with data warehousing, business intelligence or analytics groups, or less often, he or she may be aligned with a technical database administration team. It’s also important to keep in mind that database administrators (DBAs) are not data administrators.

The IT Service Management world, where Asset and Configuration Management lives, has often been oblivious to the concerns of data management. In a digitally transforming world, however, these two areas have every reason to become more integrated and provide important insights and lessons for IT professionals.

The flow of data into and through an organization, how that data is managed, and how data quality can be optimized to drive value across the enterprise are mission-critical considerations not just for DBAs and the IT organization, but for the C-Suite as well.

The Role of Data Quality in Digital Business Transformation

Companies have always struggled to achieve the level of data quality necessary to run their business optimally. In the digital world, their challenges are an accelerating operational pace, increasing data volumes, variable modification rates, and widespread consumption. In the modern enterprise, data is pervasive, and these factors have become a serious obstacle to digital transformation initiatives because they cannot be managed with old analog procedures. Previous methods and workarounds are insufficient; in the digital business, the weaknesses of the old methods are magnified and inhibit business growth and success.

Today’s fast-paced digital economy requires real-time or near-real-time data to respond properly to events. Companies cannot effectively manage business operations or enable informed decisions if their data is incomplete or inaccurate. Information that was gathered, manipulated, and reported historically may no longer accurately portray what is going on within IT or the business units. To be effective, decisions must be based on data that is current and reflects what is happening internally at that moment.

Compounding the organization’s transformation challenge is the rate at which data is generated. Data volumes and change rates are staggering, accelerating, and further complicated by the adoption of new technologies which continue to spawn additional data sources. This results in more unsynchronized data sources that need to be managed while continuing to grow in volume and complexity.

Don’t assume your digital transformation endeavor will guarantee seamless transmission and sharing of information. With increased volume and availability comes increased consumption and use of data by a broader audience. When data is made available to users, the assumption is that it is accurate and of high quality. This is where problems surface; users tend to trust the data without validating it against any other source because it comes from their system of record, so they consume it without hesitation. This is where the bad decisions that cause major operational challenges begin.

The adaptation to a data-driven world has had its successes, but it has also had its challenges. There must be zero tolerance for poor data quality within a data-driven business; recognizing that this needs to be tackled is the starting point for minimizing the negative impact of these factors.

Blazent’s solutions address enterprise data quality improvement efforts by presenting the most accurate, current, and contextualized state of IT data. Data Quality initiatives must be enabled and carefully monitored if substantive operational improvements are to be achieved. You can no longer rely on the old methods of periodically mashing data together into a report and sending it out across the company. Without changing archaic methods for managing data, your organization will become just another casualty of a failed data-driven transformation initiative.

Now IT is your problem (part 2 of 2)

One of the most interesting and successful aspects of ServiceNow and other IT service request management tools is the way they have moved into non-IT areas. As business becomes more complex and digital, the desire for process automation has expanded. Facilities, Marketing, Human Resources, Legal, and other areas find that they are offering internal “services,” and that a structured service catalog is just as beneficial for them as for the CIO.

We also see that there is less and less difference between internally-facing and externally-facing services. In fact, Amazon directed its product teams to build everything as if it might eventually be provided as a service to external users.

In such models, where is the divide between “IT” and “The Business”? If a computer system is down, and someone is calling in to report the incident, how is this different if that person is an external customer?

It’s different in one important way: your internal Incident Management process is now, whether you realize it or not, part of Customer Relationship Management. In fact, IT service management tools have often been used in this way, but we are seeing a much broader across-the-board convergence:

Digital systems are complex because they must enable the delivery of value to customers. Customers often require assistance, troubleshooting, or even additional features. As organizations evolve through digital transformation, there may be less and less sense in keeping a distinction between ITSM and operational customer management. (Customer relationship management in terms of the sales cycle is, of course, a different thing.)

In the same way, IT projects and portfolios simply become enterprise products, with more or less digital components:

Products are outcome-focused and represent organizational value; because of this, product management is gaining increased attention, and project management per se is being questioned. IT service and application portfolios may remain, but increasingly the question is what part of a market-facing portfolio they represent.

Finally, as enterprise operations becomes more digitized, IT systems management merges with enterprise operations as a whole:

Asset and Configuration Management are no longer just “IT.” The bulldozers, trucks, and railcar wheels are Internet of Things endpoints. Their digital exhaust represents a massive volume of dynamically changing data, ripe for analysis for risks and opportunities. And traditional, non-IT “operations” is increasingly digital, requiring the same tools, skills, and risk controls as traditional “IT.”

Asset Management in particular converges with the operational enterprise concern of Fixed Asset Management. Reconciling IT and Fixed Asset systems has long been a challenge. As assets increasingly are endpoints, or at least have digital IDs, will it be possible for these two areas to merge? The answer is yes, but expect this to be a steep climb.

Further relationships could be drawn between all these areas. Vendors have increasing digital components in all their products, so all vendor management needs some ability to assess digital offerings for value and risk.

As we continue through this digital transformation, what is the role of Asset Management? Asset management is too critical a concern to be left to ad-hoc approaches, and yet with virtualization, automation, and Cloud we can have both agility and well-managed assets. InfoWorld executive editor Galen Gruman suggests that the new CIO might focus on risk, processes, and/or platforms, all of which imply asset management.

At the end of the day, the digitally transformed organization still must understand itself, its investments, its risks, and its responsibilities. Asset management, high-quality asset data, and, most critically, Data Quality Management remain as important as ever in this new world.

How much is your bad data really costing you?

Every day, employees spend more time than they should trying to find the best available data to make decisions. While some of those decisions are more vital to business operations than others, how would people know in the moment? Leveraging poor-quality data for one key decision could have a significant, expansive downstream impact on the corporate bottom line. The bigger issue is that poor-quality data can cost the organization unknown amounts of money in a variety of ways, and these costs may only surface in high-profile cases. The smaller ones, which go undetected and accumulate for months or years, can cost far more.

Individuals are focused on getting their job done to the best of their ability and will use the sources of information available to them. In many cases, these sources carry no measure of the quality of their content, and individuals use them assuming they are accurate and complete, an assumption broadly acknowledged to be wrong. This is why it is so important to improve the quality of all data sources used for operational business decisions, and to retire those which are not accurate or current enough to warrant keeping around. Retiring them can also yield significant savings in unused licenses and support, which is something we will get into in a future blog.

When entering into a situation where a decision needs to be made, an individual will seek out the data sources they need and are aware of to help make that decision. In most cases, they already know how much time and effort they will need to exert to massage the data in order to get it into the right format or context. They may even know that the data needs to be validated against a second or third source before being acted on.  All this expended effort takes significant time, and time is money.

If you could improve the quality, remove the need to question accuracy, and eliminate the comparisons against secondary or tertiary sources, your employees could not only act much faster but also make better-informed decisions, leading to savings and improved business outcomes. Organizations need to help their employees by improving the quality of the data they consume. Improved data quality is vital to all business operations, and reducing the inherent risk of making poorly informed decisions helps with resource retention.

Finding the best data for normal daily decisions doesn’t have to be a struggle. The environment can be improved by simply improving the data quality of some sources and eliminating those which are too far gone to save.

Once this is accomplished, it becomes much clearer how decisions impact business operations. Employees can then act far more intelligently in prioritizing work and focusing on the efforts that have greater downstream impacts.

Now IT is your problem (part 1)

What exactly do people mean when they say “IT”?

  • The computers? The networks?
  • The people who run them?
  • That organization that loves to say “no” and is always slow and expensive?
  • Dilbert’s “Preventer of Information Services”?

IT has been joked about, had its massive failures cataloged, and had the end of the CIO predicted, and yet it may be having the last laugh.

It’s true that many organizations are decentralizing information technology. What were once called “shadow” systems or “rogue IT” are now the reality, as business units mature through digital transformation. In this evolving context, having a centralized Chief Information Officer in charge of all technology makes about as much sense as having all employees report to the Director of HR.

However, there is a twist. Business executives, many of whom have the perception of IT as “bureaucratic,” may look forward to having their “own” technology and technologists. They may even think they can manage technology better, that it can’t be “that difficult.” Process? Who needs it?

When and if the first few teams are instituted “outside” of IT control, including the ability to acquire computing capacity and build functionality on it, they may move quite quickly, giving their new business sponsors great satisfaction.

But problems soon arise, perhaps as predicted by the CIO when the decentralization process started:

  • How are these systems secured?
  • Are they compliant with licensing requirements?
  • Are they backed up?
  • How do you know the data in the system is accurate?

The new business-owned IT finds that some consistent way of building and running new functionality is still required. It may even find that, although computing capacity, developers, and operations staff are funded directly from its budget, enterprise processes and standards represent important guidance that would otherwise have to be re-invented.

Take, for example, configuration management. This is the kind of activity business leaders love to hate. At first glance, it might seem far removed from the bottom line. And yet in the digital economy, it’s front and center. Lines of business deploying vast numbers of Internet of Things endpoints have no choice but to inventory and control them; the risk is far too great otherwise. And the infrastructures and services required to do so are not easily or cheaply constructed.

As technology becomes more and more diverse, employing NoSQL approaches that are tolerant of new, diverse data types becomes essential. Understanding security and risk will increasingly require sophisticated analysis across the digital exhaust of these massive, complex infrastructures. At the end of the day, the financial and organizational reporting structures will be less important than the overall digital pipeline. In this new world, data still needs to be governed and managed, auditors will still be evaluating operational practices and risk controls, regulators will still ask for evidence, and will not be interested in whether the organization has centralized IT or distributed digital services.

In spite of this, digital technology will remain with the lines of business, for the same reason that the HR director does not “manage all the people,” nor the CFO “manage all the money.” The Agile movement, as a key enabler of digital transformation, emphasizes that digital products require close collaboration between developers and business experts, along with the fastest possible feedback from the market. The days of the “order taking” CIO organization are over.

What becomes of traditional IT capabilities in this new world? There is a great irony. We have talked for many years about how “IT” must become closer to “The Business.” This is now happening, and the business is being transformed as much as, or more than, traditional IT.

In this new model, incident management becomes customer service management, configuration management pervades business operations, the Internet of Things becomes the production shop floor, the IT portfolio becomes the digital product portfolio, and IT vendor management simply becomes vendor management, now requiring product specialists in Cloud services. And data management of the digital infrastructure simply becomes data management.

Despite the shift of focus to “The Business”, IT remains the organization that is best equipped to handle the applications once they are deployed. IT sets the bar for security, risk controls and IT Service Management centered around the Configuration Management Database (CMDB).

Automation enables you to stay ahead of the data deluge

Anyone reading this would probably agree that nearly all enterprises are in an endless struggle to maintain their operational data. This is not new; in fact, it has been an ongoing challenge since the first companies began to operate at scale. In the very early days there were people whose actual job title was “Computer” (that is, one who computes). Even back then, employees were a key element of data entry and correction, and they did an acceptable job of it.

Of course, that was then, and this is now. The problem is that as data volumes increased, employees were no longer able to keep up with the deluge, and accuracy (that is, quality) began to suffer. The problem is growing exponentially, and there is realistically no end in sight. In order to survive (or better, thrive), organizations need to develop new methodologies that enable them to stay ahead of the effort to maintain quality data, and as we see in the press nearly every day, that isn’t happening as often as it should.

Organizations have always needed high-quality, accurate data, whether it was delivered in the 1800s by a “Computer” named Sears Cook Walker (the first person who actually held that title) or whether it’s delivered now by a cloud-based AWS server farm. An interesting side effect is that advances in technology have not only brought more capabilities, they’ve brought a steady increase in the ability to measure and track nearly every detail of the operation. With this increase in measurement capability paralleled by an increase in data, the process is no longer manageable by humans (sorry, Sears). Even with blindingly fast machines doing the work, there are increasing gaps in what can be validated. The pressure to process more, and to do it faster, tends to increase the likelihood of mistakes. Mistakes that become endemic to business processes are unacceptable for an organization relying on accurate operational data to make important business decisions.

The good news is that the continuous advancement of automation offers opportunities to address the issues of volume, speed, and accuracy. As with any technology (or anything, really), the right tool must be selected and then implemented properly to realize the benefits. Automation also offers process consistency, which under the current workload is well beyond the scope of humans to manage. As a side benefit, automation tools also address resource issues where individuals don’t want to, or can’t, perform mundane and repetitive data quality tasks.

All individuals want to feel that they are contributing positively to the growth and success of the organization. Entering and correcting data is simply a task most employees would rather not do on a regular basis, and it’s a poor use of resource dollars. Automation technologies can help by taking away some of these lower-end tasks and performing them better, faster, and more accurately.

Automation technologies continue to mature, and many already offer capabilities that are unrealistic to expect from individuals working manually. Even the simplest aggregation and normalization of data from two sources is an immense effort for an individual. For automation technology, however, this is just a starting point; better still, it can run 24/7 with consistent results every time. The benefit grows considerably when we consider that aggregation typically spans far more than two data sources, something that is not realistic for an individual to perform on a regular basis, and definitely not with acceptable speed, consistency, or quality.
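
To make that concrete, here is a hedged sketch of even the “most simple” case: normalizing records from two hypothetical feeds and merging them on a shared key. The source names, fields, and values are invented; a real pipeline would also handle conflicts, audit trails, and many more sources:

    discovery_tool = [
        {"host": "APP01.corp.example", "ip": "10.0.0.5", "os": "RHEL 8"},
        {"host": "db02.corp.example",  "ip": "10.0.0.9", "os": "Windows 2019"},
    ]
    asset_register = [
        {"hostname": "app01", "owner": "web team",  "cost_center": "CC-100"},
        {"hostname": "db02",  "owner": "dba group", "cost_center": "CC-204"},
    ]

    def normalize_host(name: str) -> str:
        """Lower-case and strip the domain so both feeds share the same key."""
        return name.lower().split(".")[0]

    merged = {}
    for rec in discovery_tool:
        merged[normalize_host(rec["host"])] = {"ip": rec["ip"], "os": rec["os"]}
    for rec in asset_register:
        merged.setdefault(normalize_host(rec["hostname"]), {}).update(
            {"owner": rec["owner"], "cost_center": rec["cost_center"]}
        )

    for key, value in merged.items():
        print(key, value)   # one consolidated record per host, with fields from both feeds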

Companies will continue to struggle with the aggregation and normalization of data, which is why they need to look deeper into automation tools that deliver higher quality, consistently and faster. They will also get the added benefit of relieving their employees of the mundane tasks, allowing them to reallocate those resources to analyzing and assessing the results rather than collecting and sorting the raw data.

In order to thrive, enterprises need to take a step back and look at the longer-term patterns driving their operational decisions. Anytime there is a disconnect, anytime something happens that makes you want to hit the pause button, chances are there is an underlying data issue; more specifically, an underlying data quality issue. The fact that we live in a fully connected, mobile, social, cloud-based environment should be seen as the massive opportunity it is, but that opportunity comes with data issues that need to be addressed head-on.

Quality data is the key ingredient to better business outcomes

Quality inputs are a core requirement for a quality output, and this is true for nearly any profession, whether you’re a chef or an IT professional. Your service delivery organization needs high-quality, reliable data before it can assemble that data properly for decision makers and ensure positive business outcomes. You need not only to find those reliable data sources, but also to put in place the necessary quality controls and checks. Those high-quality data elements are the vital ingredients that make up the products and services your customers consume.

If you ever hear chefs describe the dishes they create, you will hear them talk about the same things in nearly the same terms (obviously adjusted for domain). They believe the only way to deliver the best-quality product for their customers is to ensure that the ingredients they put into their dishes are consistently of the highest quality. It’s simply the philosophy that the output will never be better than the input, or in the common vernacular, garbage in, garbage out. Improve the input quality, and the end result improves as well.

This concept isn’t exclusive to food; it also applies to data used in any sort of business decision-making. It is especially applicable when we’re talking about aggregating data from various sources or migrating data from one database to another. It makes no sense to migrate data to a new source when its quality level is unknown or, worse, known to be poor and unreliable. Yet we see attempts to simply lift and move data to a new source all the time. What is shocking is that people are surprised the poor-quality data wasn’t miraculously corrected in the migration. Data doesn’t magically get better on its own; in fact, a migration runs a real risk of making things worse without a directed effort to improve the data as part of the migration process.

Every employee makes decisions based on the data available to them, so it’s important that they have the best-quality data available. This is similar to the chef who carefully picks only the best leaves from their personal herb garden, because these are the only ones they would include in their special gourmet dish. As IT professionals, we also need to evaluate our data sources carefully, as we would high-quality ingredients. We need to decide which sources are best to use for our systems and which data elements within them we can count on to help improve business outcomes. For long-term success, we also need to determine which processes will be necessary to maintain the desired quality levels. It is important to keep in mind that not all sources have to be used; some may only be needed in a specific context, and some may be unable to provide the data quality we seek.

Every product or service created by an organization can only be as good as the elements that go into it. IT organizations need to employ more rigorous data quality controls and audit checks to ensure that those elements are of the very best quality. When a source or a data element is not of optimal quality, an alternative has to be considered or used. Regular checks and balances must be in place between systems to ensure that they don’t begin to diverge and potentially corrupt one another.
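
A regular check between systems can start as something as simple as diffing their views of the same records and flagging divergence before it spreads. The sketch below uses hypothetical data and field names:

    # Compare two systems' views of the same assets and report every disagreement.

    cmdb = {
        "app01": {"owner": "web team",  "status": "active"},
        "db02":  {"owner": "dba group", "status": "active"},
    }
    fixed_assets = {
        "app01": {"owner": "web team", "status": "active"},
        "db02":  {"owner": "finance",  "status": "retired"},
    }

    def reconcile(system_a: dict, system_b: dict):
        """Yield (key, field, value_a, value_b) for each field where the systems disagree."""
        for key in sorted(set(system_a) | set(system_b)):
            a, b = system_a.get(key, {}), system_b.get(key, {})
            for fld in sorted(set(a) | set(b)):
                if a.get(fld) != b.get(fld):
                    yield key, fld, a.get(fld), b.get(fld)

    for key, fld, val_a, val_b in reconcile(cmdb, fixed_assets):
        print(f"{key}.{fld}: CMDB={val_a!r} vs FixedAssets={val_b!r}")
    # db02.owner and db02.status are flagged; app01 is consistent across both systems.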

If the effort is intended as a one-time migration, it is even more important to have a rigorous mechanism to normalize and cross-reference the data for quality and completeness, because there will be no second chance to identify discrepancies, and the end product will be negatively impacted. Bottom line? Quality will always matter, and the more critical the end product, the more important quality becomes. Getting an ingredient in a gourmet dish wrong will have a small, localized effect, but migrating bad data in a health care application can be disastrous. Bad data and its effects are entirely manageable; the tools are available, and the best practices are well established.