3 Hurdles To Operationalizing Financial Data

How to make sure your data pipeline is giving you the information you need.

Do you manage your data, or does your data manage you? With budget planning season approaching, the CFO’s team needs the right information at their fingertips at a moment’s notice. Managing data stuck in disparate applications can lead to fatigue, frustration and anxiety over data integrity. Is it complete? Did it combine properly into the report we need for strategic planning? What is missing? How can we better operationalize data to derive more benefit?

When organizations deploy, upgrade or refine their financial systems, the importance of data gets plenty of lip service, yet data itself remains an underappreciated asset. The reality is that most organizations either underestimate the effort involved in developing and deploying a true data integration strategy, or fail to commit the proper time and resources to it.

Awash in data, CFOs are “in a unique position to drive positive change for the business, pivoting the finance function from measuring value to creating value and driving growth,” notes Accenture. But achieving desired outcomes starts with good data, says Deloitte in its CFO Guide to Data Management Strategy: good, precise data, integrated in real time for accurate scenario planning. Yet in an Accenture survey, only 16% of CFOs said they are getting data at the scale they need. If you’re serious about operationalizing data to better optimize cash flow, improve forecast accuracy and coordinate business-wide initiatives, you need a data pipeline.

Laying the groundwork before laying the pipe

A data pipeline is the series of interdependent processes that moves data from one system to another: from applications into data warehouses, analytics databases or payment processing systems, for example. Common steps in a data pipeline include extracting, cleansing, transforming, augmenting, enriching, filtering, classifying, aggregating, mapping and loading data, as well as protecting its integrity.
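To make those steps concrete, here is a minimal sketch in Python. Everything named here is a hypothetical stand-in: the CSV export, the column names and the use of SQLite as a toy warehouse are for illustration only, not drawn from any particular product.

```python
# Minimal sketch of a pipeline's extract/cleanse/transform/load steps.
# The file name (invoices.csv), column names and SQLite "warehouse"
# are hypothetical stand-ins for illustration.
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from an application's CSV export.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def cleanse(rows):
    # Cleanse/filter/map: keep only rows with an invoice id,
    # normalize amounts, and map to a clean schema.
    for row in rows:
        if row.get("invoice_id"):
            yield {"invoice_id": row["invoice_id"],
                   "amount": round(float(row["amount"]), 2)}

def transform(rows):
    # Transform/classify: tag each invoice with a size bucket.
    for row in rows:
        row["bucket"] = "large" if row["amount"] >= 10_000 else "standard"
        yield row

def load(rows, conn):
    # Load: write the cleaned, classified records into the warehouse.
    conn.execute("CREATE TABLE IF NOT EXISTS invoices "
                 "(invoice_id TEXT, amount REAL, bucket TEXT)")
    conn.executemany(
        "INSERT INTO invoices VALUES (:invoice_id, :amount, :bucket)",
        rows)
    conn.commit()

load(transform(cleanse(extract("invoices.csv"))),
     sqlite3.connect("warehouse.db"))
```

Each stage hands its output to the next, which is the essence of the pipeline: data flows through a chain of small, testable steps rather than one opaque transfer.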

Data generated in one application may feed multiple data pipelines, and those pipelines may in turn carry data into multiple other pipelines or applications. Businesses today are moving data among an ever-increasing number of applications, which are in turn connected to multiple users. One annual SaaS trends survey estimated that the typical mid-market company uses 185 different apps, but could have several thousand app-to-person connections. This makes the efficiency of data pipelines a crucial consideration in planning and development.
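A toy illustration of that fan-out, with hypothetical record fields and pipeline names: the same source data is consumed by two independent pipelines whose outputs could themselves feed further systems.

```python
# Hypothetical fan-out: one application's data feeding two pipelines.
records = [
    {"invoice_id": "A-1", "amount": 4200.00},
    {"invoice_id": "A-2", "amount": 12500.00},
]

def analytics_pipeline(rows):
    # e.g. aggregate invoice totals for the data warehouse
    return {"total_invoiced": sum(r["amount"] for r in rows)}

def payments_pipeline(rows):
    # e.g. flag invoices that need approval before payment
    return [r for r in rows if r["amount"] >= 10_000]

print(analytics_pipeline(records))  # {'total_invoiced': 16700.0}
print(payments_pipeline(records))   # [{'invoice_id': 'A-2', ...}]
```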

But what challenges do financial teams and their information technology counterparts face in creating, testing and deploying data pipelines? There are three key hurdles to clear before continuing on the journey:

  1. Process – discovering data
  2. Technology – avoiding myopia
  3. People – establishing ownership

Process. A data integration strategy is multifaceted, but it should always begin with data discovery: the process of cataloguing the data available across the organization and determining how it can be used to provide actionable insights.

During the data discovery phase, you seek to understand the type and frequency of data (where it lives, whether it is structured or unstructured, how often it is updated, whether it will be streamed or updated in batch operations) and how it relates to the various business processes it may support. Only after an organization invests properly in data discovery should it begin to create data pipelines. This may sound obvious, but unfortunately this critical step is bypassed all too often.
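One lightweight way to record discovery findings is a simple catalogue. The sketch below mirrors the questions above; the systems and datasets named are hypothetical examples, not a prescribed schema.

```python
# A minimal data-discovery catalogue sketch; entries are hypothetical.
from dataclasses import dataclass

@dataclass
class CatalogueEntry:
    name: str             # dataset or feed
    system: str           # where the data lives
    structured: bool      # tables vs. documents, logs, email
    cadence: str          # "streaming", "hourly batch", "nightly batch"...
    processes: list[str]  # business processes the data supports

catalogue = [
    CatalogueEntry("invoices", "ERP", True, "nightly batch",
                   ["cash flow forecasting"]),
    CatalogueEntry("support_tickets", "helpdesk SaaS", False, "streaming",
                   ["customer churn analysis"]),
]

# A discovery question the catalogue can answer directly:
# which feeds will require a streaming rather than batch pipeline?
print([e.name for e in catalogue if e.cadence == "streaming"])
```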

Technology. A common pitfall is technology myopia. There are myriad data integration providers, and large organizations are likely to have an investment in at least one of them already. This can lead to a myopic view that “we are a <insert technology vendor name/platform here> shop.” There is obvious value in leveraging existing technology investments around which internal competencies have been built, but this is not without potential downsides. An organization should stay open to new approaches and conduct a thorough technology evaluation, as there may be solutions better attuned to the data pipelines that support a given business process and its applications.

People. Another key consideration is establishing data and process ownership. This theme often goes hand in glove with the technology myopia described above. Unfortunately, some people treat technology choices as a source of power, and the mini-fiefdom is not a foreign concept to anyone who has worked in a corporate setting. The drawback to this style of engagement with business partners is that it rarely drives true partnership. Rather than collaborating to achieve the most effective result, precious time and resources are wasted navigating unnecessary politics.

A well-designed data pipeline can empower and elevate multiple constituents within an organization. The IT team should provide and support the connective tissue that makes data pipelines effective. The business users on the CFO’s team who are served by the pipelines can, and perhaps should, own the relationship between the data models the pipelines connect; they are closest to the data and understand it most deeply. Such a partnership between Finance and IT drives efficiency across the life cycle of pipeline development, testing and maintenance. But people may need a reminder that hoarding information does not reinforce power; it is the sharing of information, and the results that follow, that makes them truly powerful.

Give short shrift to any of these elements and you put the success of your data pipeline project at risk. Take all three into account before building your pipeline, though, and you will establish a solid foundation and put yourself firmly on the right path.

