Mainframe Blog

Making the Mainframe Central to Your Data Fabric Strategy | Broadcom

Written by Deborah Carbo | Jan 30, 2023 2:14:00 PM

How can you have a data fabric strategy without the most valuable data in the enterprise—mainframe data? That’s like having a dinner party without the main course.

Data management consistently ranks among the top challenges faced by enterprise organizations. That is not surprising: data volume is growing at a 23% compound annual rate through 2025, according to IDC estimates.1

The surge of data, coupled with an increasing need to generate insights for a competitive edge, opens significant opportunities for organizations with the right technology stack and architecture. Add in the fact that data is increasingly diverse, complex, and distributed across disparate mediums, and it becomes clear that traditional data management techniques fall short.

Against this backdrop, data fabric architecture promises to govern, secure, and organize data through a metadata layer that enables self-service capabilities for end users across different domains and organizational boundaries. The benefits of employing a data fabric architecture for data management are not new: they include increased visibility and insights, access and control, and security. “Data fabric” itself is just a new term for something enterprises have been working on for some time, and that work has been gaining momentum as compute power and storage become cheaper and data growth surges unabated.

Implementing a successful data fabric strategy requires fundamental shifts in how data is organized and served. It requires a holistic architecture whose goal, as Gartner frames it, is to provide “frictionless access and sharing of data in a distributed network environment.”2 Under this unified data management framework, existing data sources, whether sitting in traditional data warehouses, data lakes, cloud data stores, or on the Mainframe, can remain in place, alleviating the risks and unnecessary costs associated with keeping multiple copies of the same data.

Not Just Any Data Source

The Mainframe is the system of record for valuable data generated from transactions, and no data fabric strategy can be complete without incorporating its unique characteristics. Consider that, for many enterprises, the Mainframe is the foundation of the business, supporting 72% of the world’s mission-critical transaction processing at unparalleled speed and scale. It’s a gold mine for actionable data insights.

It’s possible to incorporate Mainframe data as just another data source, but the most compelling data fabric architecture exploits how tightly Mainframe data is coupled with the native applications that generate it. Most data fabric architectures lean toward propagating raw Mainframe data off platform to combine with other sources for information generation and analytics. Left behind, though, is the application knowledge and business context built over decades that help define the meaning of the data. Context around core applications aids the data discovery and definitions that are fundamental characteristics of a high-performing data fabric architecture.

For instance, the Mainframe handles much of the transaction processing for large banking and insurance institutions, generating valuable data on claims, purchases, loans, and more. One large insurance customer, whose mission centers on best-in-class member services, built a data fabric to help improve response times during periods of expected inclement weather. Claims are processed on its Mainframe systems, so the customer needed ready access to this data to combine with weather forecasts in covered areas. The result was better insight into the risk of a large spike in claims, so the company could proactively respond to its members.
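A minimal sketch of the idea behind that use case. The region names, claim records, and threshold below are invented for illustration and do not reflect the customer’s actual system; in practice the claims side would come from the Mainframe system of record via an API rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical inputs: recent claims from the Mainframe system of record,
# and a weather forecast feed, both keyed by region. All names and
# thresholds here are illustrative.
claims = [
    {"region": "FL-SE", "type": "property"},
    {"region": "FL-SE", "type": "auto"},
    {"region": "TX-N", "type": "property"},
]
forecast_severity = {"FL-SE": "hurricane", "TX-N": "clear"}

def regions_at_risk(claims, forecast_severity, threshold=2):
    """Flag regions with severe weather and an already-elevated claim count."""
    counts = defaultdict(int)
    for claim in claims:
        counts[claim["region"]] += 1
    return [
        region
        for region, severity in forecast_severity.items()
        if severity != "clear" and counts[region] >= threshold
    ]

print(regions_at_risk(claims, forecast_severity))  # → ['FL-SE']
```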

No data fabric strategy can be complete without incorporating the unique characteristics of the Mainframe.

Despite the foundational role data plays in any data fabric strategy, it’s the applications built on top of the data that deliver insights and interact with consumers. The architecture must factor in the unique integration between data and applications on Mainframe systems. Without that consideration, understanding where applications exist and how to expose data with applied application logic become unscalable challenges.

In order to democratize data for secure access across the business—the mark of success for any data fabric—it is first essential to understand the underlying meaning of Mainframe data within the application context. The applications and data have evolved together, with the data being “interpreted” by the applications in non-standard ways. Fortunately, there is a way to enable access to this valuable information. Enter application programming interfaces (APIs).

APIs and Mainframe Access

These days, you can’t mention data and Mainframe in the same sentence without talking about APIs. APIs are the keys to unlocking real-time access to Mainframe data as part of a data fabric architecture.

Some APIs allow for a context-neutral form of data access, where you can see the data, understand what it is, and use it. But there’s a catch: all that data is presented without context, which can hamper its utility and value, depending on the use case.

Other APIs make it possible to access Mainframe data through well-defined business logic. For example, a CICS transaction carries both application logic and data.

The best-case scenario combines the two: an API in the data fabric whose data access includes a metadata description of what the API is doing. This makes it possible to identify and delineate which services a specific data source, including the Mainframe, provides. A data fabric architecture uses metadata to catalog and make sense of data collected from a myriad of sources. Data originating on z/OS is so tightly tied to application logic that this metadata isn’t available unless the application itself accesses the data.
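To make the metadata idea concrete, here is a hypothetical catalog entry and lookup. The API name, field names, and description are all invented for illustration; a real fabric catalog would be far richer.

```python
# Hypothetical catalog entry: the fabric's metadata layer records what a
# Mainframe-facing API does, so consumers can discover the service without
# knowing the underlying application. All names are invented.
catalog_entry = {
    "api": "getOpenClaims",
    "source": "z/OS",
    "description": "Returns open claims with application logic applied "
                   "(status rules, region normalization).",
    "returns": ["claim_id", "region", "status"],
}

def discover(catalog, keyword):
    """Find catalog entries whose description mentions a keyword."""
    return [entry["api"] for entry in catalog
            if keyword.lower() in entry["description"].lower()]

print(discover([catalog_entry], "claims"))  # → ['getOpenClaims']
```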

APIs also play a key role in enterprise modernization. They provide access to Mainframe application logic and data that developers can use in conjunction with their favorite integrated development environment (IDE). Developers can focus on innovation—building new applications with their IDE of choice or enhancing existing applications—with APIs to integrate deployment pipelines. This modernization empowers organizations to leverage existing business data to build new web, mobile, or custom applications, transforming internal and external customer user experiences.

Mainframe databases, from Broadcom’s IDMS and Datacom to IBM’s Db2 for z/OS and IMS, are built for industry-standard open access in support of the most valuable use cases that integrate critical Mainframe data in real time. This approach enables modern developers to seamlessly generate APIs, including REST APIs, for easy access to mission-critical data stored in Mainframe databases. These APIs provide the core support needed for IDMS, Datacom, Db2 for z/OS, and IMS, and set a gold standard for any data fabric architecture.
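As a sketch of what consuming such a generated REST API might look like: the base URL, resource name, and query parameters below are assumptions for illustration, not a documented Broadcom or IBM endpoint.

```python
from urllib.parse import urlencode

# Illustrative only: assume the database's generated REST layer exposes a
# /claims resource at an invented base URL.
BASE_URL = "https://zos.example.com/api/v1"

def build_query(resource, fields, filters, limit=100):
    """Build a REST query that asks the Mainframe-side API for only the
    fields and rows an application needs, leaving the rest in place."""
    params = {"fields": ",".join(fields), "limit": limit, **filters}
    return f"{BASE_URL}/{resource}?{urlencode(params)}"

url = build_query("claims", ["claim_id", "region", "status"],
                  {"status": "OPEN"})
print(url)
```

The application would then issue an ordinary HTTP GET against this URL; the point is that the query itself names only the data the caller needs.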

From a security perspective, opening access to data in the right way minimizes risk. It’s true that the further data gets from its original source, the less control you have over it. Many APIs, however, provide a way to access Mainframe data without compromising security: customers can selectively pull only the data a specific application needs, when it’s needed. The data remains in the Mainframe’s security-rich system of record, significantly reducing the risk of a breach. For additional protection, APIs can layer on user access validation and limit access to just the data required for a particular application.
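The selective-access idea can be sketched as follows. The client IDs, entitlement sets, and record fields are all invented; a real implementation would delegate these checks to the platform’s security manager.

```python
# Sketch of the layered checks described above: validate the calling
# application, then return only the fields it is entitled to see.
ENTITLEMENTS = {
    "claims_app": {"claim_id", "region", "status"},
    "audit_app": {"claim_id", "region", "status", "ssn"},
}

def filter_record(record, client_id):
    allowed = ENTITLEMENTS.get(client_id)
    if allowed is None:
        raise PermissionError(f"unknown client: {client_id}")
    # Only entitled fields ever leave the system of record.
    return {k: v for k, v in record.items() if k in allowed}

record = {"claim_id": "C-1", "region": "FL-SE",
          "status": "OPEN", "ssn": "XXX-XX-XXXX"}
print(filter_record(record, "claims_app"))
```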

The Future of Data Fabric Architecture

Data fabric deployments are expected to quadruple by 2024, driving efficiency in data utilization and decreasing human-driven data management tasks by 50%.3

Organizations in this highly competitive market expect to gain an edge over the competition through data-driven decisions. The data sitting on the systems of record is arguably the most important component for informing those business and operational decisions. For example, with additional data assets made available through a data fabric, a supply chain leader can more rapidly and more fully understand the relationships between supplier delays and production challenges. This information gives the organization time to develop an appropriate response. In essence, a data fabric strategy enhances decisions with the right information, giving the supply chain leader time to procure from new suppliers or notify affected customers.4

"... enterprises improve outcomes when data-driven decisions are made."

Another powerful tool is incorporating machine learning (ML) algorithms into a data fabric architecture. These algorithms can quickly comb through operational data to recommend actions or, in some cases, act automatically. Automating routine remediation tasks, for example, frees up considerable time for employees to focus on projects that provide higher value to the business. ML can also be used to apply rules or standards to information shared across the data fabric. This is especially valuable in compliance situations, where governance rules must be applied to information across the data fabric regardless of its location.

The Mainframe Advantage

In a competitive world that is data-driven, if not data-ruled, centering your data fabric strategy on the Mainframe’s data and the platform’s qualities of service can generate high reward.

As data volume continues to grow and the need for quality data goes up, how enterprises choose to manage their data will directly impact the customer experience—and ultimately drive better business outcomes.

So, serve up Mainframe data as the main course in your data fabric strategy and watch your business insights turn into a feast of growing value.

Who’s ready for dessert?

If you’d like to discuss the data fabric architecture or your data management strategy further, please contact me directly at Deborah.Carbo@broadcom.com.

  1. IDC. Data Creation and Replication Will Grow at a Faster Rate Than Installed Storage Capacity, According to the IDC Global DataSphere and StorageSphere Forecasts. Accessed Dec 20, 2022 https://www.businesswire.com/news/home/20210324005175/en/Data-Creation-and-Replication-Will-Grow-at-a-Faster-Rate-Than-Installed-Storage-Capacity-According-to-the-IDC-Global-DataSphere-and-StorageSphere-Forecasts

  2. Gartner. Gartner Identifies Top 10 Data and Analytics Technology Trends for 2019. Accessed Dec 20, 2022 https://www.gartner.com/en/newsroom/press-releases/2019-02-18-gartner-identifies-top-10-data-and-analytics-technolo

  3. Gartner. Striving to Become a Data-Driven Organization? Start With 5 Key D&A Initiatives. Accessed Dec 20, 2022 https://www.gartner.com/en/information-technology/insights/data-and-analytics-essential-guides

  4. Gartner. Data Fabric Architecture is Key to Modernizing Data Management and Integration. Accessed Dec 20, 2022 https://www.gartner.com/smarterwithgartner/data-fabric-architecture-is-key-to-modernizing-data-management-and-integration