Data Management and Artificial Intelligence
Managing Data With Efficiency and Expediency
“Big data is at the foundation of all of the megatrends that are happening today, from social to mobile to the cloud to gaming.” Christopher Lynch – U.S. entrepreneur, executive chairman & CEO at AtScale.
“Mobility, cloud and big data all promise to help enterprises increase efficiency and productivity, improve decision-making and lower costs. The laudable goal is to make your business more competitive, but for your IT, legal and compliance teams, these new technologies often lead to increased complexity, loss of control and even increased costs as massive amounts of data now move to an ever-increasing number of endpoints, including mobile devices and third-party hosting services. These challenges can be overcome with a new approach to standardizing information metadata. … The strategy is based on applying the same metadata standardization typically used on structured databases to all other data across the enterprise, on-premises and in the cloud, including all message types (email, text and SMS messaging, social media, etc.), documents (word processing, spreadsheets, presentations, etc.), and even log files. In some regulated industries, such as financial services, metadata standardization could also be applied to voice communications data, such as recorded conversations and voicemail files.” Richard Kessler – U.S. AI Governance & Risk Management Consultant at Protiviti Global Business Consulting.
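Kessler's strategy of applying database-style metadata standardization to every content type across the enterprise can be pictured as one envelope wrapped around any payload. The sketch below is purely illustrative: the schema fields and content-type labels are assumptions for the example, not a published standard from his text.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# One metadata schema applied uniformly to every content type,
# mirroring the standardization normally reserved for structured databases.
# Field names here are illustrative assumptions, not a published standard.
@dataclass
class StandardMetadata:
    record_id: str
    content_type: str      # "email", "sms", "document", "log", "voicemail", ...
    owner: str
    created_utc: str
    retention_class: str   # e.g. "regulatory-7y", "transient"
    location: str          # "on-premises" or a cloud endpoint

def tag(payload: bytes, **fields) -> dict:
    """Attach the standardized metadata envelope to any piece of content."""
    meta = StandardMetadata(
        created_utc=datetime.now(timezone.utc).isoformat(), **fields)
    return {"metadata": asdict(meta), "payload": payload}

# The same call works whether the payload is an email, a log line,
# or (in regulated industries) a voicemail file.
record = tag(b"quarterly forecast...", record_id="doc-001",
             content_type="document", owner="finance",
             retention_class="regulatory-7y", location="on-premises")
```

The point of the uniform envelope is that compliance tooling can then query one schema regardless of where the content lives or what form it takes.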
“Data management has never been more sophisticated. Cloud platforms scale instantly. Analytics tools promise real time insight. Artificial intelligence accelerates discovery.” From the article “Why Fundamentals Still Win In Data Management” posted February 27, 2026, on LinkedIn by Douglas Day – U.S. data management, governance and process improvement consultant.
“AI has become very effective at identifying interesting patterns and anomalies in data, something that was difficult to do earlier without building specialized analytics tools … Today, leaders can ask AI to predict outcomes based on existing data patterns and get answers in minutes instead of days.” Vaibhav Kumar Bajpai – U.S. software engineer, a group engineering manager at Microsoft Core AI quoted in the article “Ways AI supercharges risk awareness and data insights for CIOs”, posted March 10, 2026, on Information Week by John Edwards – U.S. business technology journalist.
“Today, every company is a data company. While they may be in the business of banking, health care or manufacturing, every business houses sensitive information, from customer data to employee records.” Julie Brill – U.S. computer/digital governance and regulatory executive, previously chief privacy officer, corporate vice president and deputy general counsel at Microsoft Corp., currently national adviser at the law firm Manatt, Phelps & Phillips, LLP.
“Vast mountains of often unstructured data can become easier to surmount with AI’s assistance. AI helps CIOs move faster by processing large volumes of data and accelerating insight into how business actually operates … The value comes when AI is applied to real business problems, not technology for its own sake. One problem AI can help tackle is observability across the organization. When AI is grounded in identity and data security, leaders can see how people, systems, and data interact. … Since identity defines how employees show up, collaborate, and contribute, starting with identity allows CIOs to better understand risk, access, and behavior across the organization. … It’s important to remember that AI should never be used to replace people -- it should augment them. Humans bring context, intuition, and judgment, but they can’t analyze data at the same scale or speed as AI. AI can continuously process information and surface patterns, allowing users to focus on higher-order thinking, decision making, and problem-solving. … The biggest mistake CIOs make is using AI without establishing a clear business purpose or understanding how it will impact people. Some organizations focus too much on security controls or technology while losing sight of the employee experience. Other enterprises may move too fast, leading to the creation of shadow AI tools lacking appropriate visibility or governance. The right approach is to start with how people work, then layer in security and AI thoughtfully.” Michael Wetzel – U.S. system engineer, CIO at risk and compliance firm Netwrix quoted in the article “Ways AI supercharges risk awareness and data insights for CIOs”, posted March 10, 2026, on Information Week by John Edwards – U.S. business technology journalist.
“Customer service, internal efficiency, and automation are still important, but AI introduces a new dimension, and a new level of urgency to this, according to Graeme Thompson, CIO at AI-powered enterprise cloud data management solutions provider Informatica. ‘It’s one thing to miss out on the opportunity to automate an internal process. It’s a completely different and much more serious thing to miss out on being able to have an AI-assisted customer experience or a fraud detection process.’ One challenge with MDM (Master Data Management) is that it’s not as sexy as the application-layer stuff, so it can be difficult to allocate the necessary resources to make it happen. While MDM tools can help, there also needs to be a process change, which requires a different mindset. There is a mindset shift that must happen to get people to buy into the cost and the overhead of managing the data in a way that’s going to be usable, Thompson says. ‘It’s knowing how to match technology up with a set of business processes, internal culture, commitment to do things properly and tie [that] to a business outcome that makes sense,’ he says. ‘[T]he level of maturity of some good companies is bad. They’re just bad at managing their data assets.’ … ‘[MDM] has very real business consequences, and I think that’s the part that we can all do better is to start talking about the business outcome, because these business outcomes are so serious and so easy to understand that it shouldn’t be hard to get business leaders behind it,’ says Thompson. ‘But if you try to get business leaders behind MDM, it sounds like you want to undertake a science project with their help. It’s not about the MDM, it’s about the business outcome that you can get if you do a great job at MDM.’ … ‘Everyone wants to bypass the MDM phase.
Let’s just get the data right for this one project, and then inevitably, [it leads] to other problems,’ says Doug Gilbert, CIO and chief digital officer at business and digital transformation service and solutions provider Sutherland Global. ‘You’ve taken that contextual understanding, and now you’re doing AI, blindly follow[ing] that data and recommendations for you. Before, you could do a kind of quasi master data management around one or two projects and not think about it holistically.’” From the article “Why Master Data Management Is Even More Important Now” posted August 19, 2025, on Information Week by Lisa Morgan – U.S. journalist.
“Effective AI systems often require data that runs afoul of traditional data management standards. Evolving those standards to suit AI means adopting new practices and policies — and new investments. Getting data right for AI is essential for CIOs to deliver successful outcomes from AI initiatives. That part is clear. What’s less clear is what that process entails given the nature of AI data use — and how to pay for the foundational work necessary to ensure the organization has data that’s “good” for AI. At issue is the fact that AI makes use of data that many traditional applications don’t — and that the data best-suited for AI workflows isn’t always of the highest quality. Instead, what makes AI data “good” is that it fits the specifics of the business use cases and algorithms that use it. Consequently, it might be perfectly fine to use data that is incomplete or not “squeaky clean” — as long as it fits the use case. Should CIOs care about this data quandary? Yes, for two reasons: First, IT data analysts must be reoriented to produce the “right” data for AI, even if the data by traditional standards seems “wrong.” This will require revisions to data management work practices and some reorientation for data analysts tasked with working with AI. Second, any data work, whether for traditional apps or AI, takes time and resources. It is also infrastructure-level work that no one ‘on the outside’ — that is, the CEO and the C-level — sees tangible value in.” From the opinion piece “The AI Data Dilemma Every CIO Must Address” posted March 24, 2026, on CIO by Mary Shacklett – U.S. freelance writer and president of Transworld Data, a technology analytics, market research, and consultant firm.
“Enterprises are not short on AI ambition. What they lack is a governance model that keeps pace with how AI is actually being adopted. Across industries, CIOs are rolling out generative AI through SaaS platforms, embedded copilots, and third-party tools at a speed that traditional governance frameworks were never designed to handle. AI now influences customer interactions, hiring decisions, financial analysis, software development, and knowledge work — often without being formally deployed in the classical sense. The result is a widening gap between rapid AI deployment and responsible-use protections. Organizations adopt AI faster than they can govern its usage, then scramble to retrofit controls after something goes wrong. Interviews with five practitioners — each working at a different pressure point of enterprise AI — reveal why this gap persists and what leaders must do to close it before regulators, auditors, or customers force the issue. The first problem is structural. Governance was designed for centralized, slow-moving decisions. AI adoption is neither. Ericka Watson, CEO of consultancy Data Strategy Advisors and former chief privacy officer at Regeneron Pharmaceuticals, sees the same pattern across industries. ‘Companies still design governance as if decisions moved slowly and centrally,’ she said. ‘But that’s not how AI is being adopted. Businesses are making decisions daily — using vendors, copilots, embedded AI features — while governance assumes someone will stop, fill out a form, and wait for approval.’” From the article “Why AI Adoption Keeps Outrunning Governance – And What To Do About It” posted February 2, 2026, on Computer World by Pat Brans – French academic, previously held senior positions with Computer Sciences Corporation, HP and Sybase.
“Generative AI (genAI) adoption is outpacing corporate governance of the technology, with 75% of companies using genAI but only a third having responsible controls in place, according to consulting firm Ernst & Young (EY). Though executives see the technology’s potential, about half admit their governance frameworks lag behind current and future AI needs. However, 50% are making major investments to address those gaps, EY’s “pulse survey” showed. The survey of 975 C-level executives across 21 countries included CEOs, CIOs, CTOs, CFOs, chief human resource officers, chief marketing officers and chief risk officers. Most C-suite leaders plan to use emerging genAI within a year, but risk awareness lags, EY said. For example, 76% plan to use agentic AI, yet only 56% understand the risks; 88% use synthetic data, but just 55% know the risks, the consultancy found.” From the article “GenAI Adoption Outpaces Governance, Ernst & Young Finds”, posted June 5, 2025, on Computer World by Lucas Mearian – U.S. journalist.
“There’s a meaningful gap between AI ambition and AI reality in most enterprises today. Pilots are running, and proofs of concept are impressive. But production deployment, particularly for agentic AI, keeps running into the same wall. The models are ready. The data infrastructure often isn’t. Understanding why requires knowing what agentic AI demands from your data environment, and why those demands are fundamentally different from anything enterprises have managed before. Traditional enterprise applications made predictable, bounded requests to databases. An agent does something far more dynamic: it plans, reasons and acts, often decomposing a single user request into dozens of parallel sub-tasks, each requiring fast, contextually rich access to live enterprise data. Relational records, unstructured documents, vector embeddings, graph relationships -- agents need all of it, simultaneously, with full context. Most enterprise databases weren’t designed for this. They were optimized for transactional reliability. When agents have to work with fragmented data stores or stale information, the core value proposition of autonomous AI -- speed, accuracy, independent action -- erodes quickly. AI agents can generate arbitrary database queries – a risk most organizations overlook. Unlike a traditional application with a carefully controlled interface, an agent isn’t inherently constrained to only the data it should see. For decades, enterprises have managed data privacy primarily at the application layer. That worked when every data interaction flowed through a human-controlled UI. It breaks down when agents connect directly to databases, often with privileged credentials, operating at machine speed across sensitive systems. The agentic AI tooling ecosystem has grown faster than the standards to govern it.
Most organizations experimenting with agents are stitching together a mix of vector databases, graph stores, document systems, open source frameworks and commercial memory services, each with different security models, different APIs and no consistent way to define agent behavior. The operational cost of this fragmentation compounds over time. When agent memory lives across three systems, you lose consistency and auditability. Agent workflows being defined differently across frameworks makes portability nearly impossible. Access controls that vary across tools introduce compliance exposure that’s difficult to inventory, let alone remediate. Emerging standards such as MCP and A2A address parts of this problem. MCP defines how agents access tools, and A2A defines how agents communicate with each other, but until recently, nobody had tackled the internal structure and portability of agents themselves. That gap is increasingly cited as a blocker to moving from pilot to production.” From the article “The Database Is The New Battleground For Enterprise AI” posted March 24, 2026, on TechTarget by Stephen Catanzano – U.S. technology consultant.
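Catanzano's warning that an agent "isn't inherently constrained to only the data it should see" points to gatekeeping agent-generated queries before they reach the database. Below is a deliberately naive sketch of that idea: a read-only check plus a table allowlist. The table names are hypothetical, and a real deployment would rely on a proper SQL parser and database-level grants rather than regular expressions.

```python
import re

def validate_agent_query(sql: str, allowed_tables: set) -> bool:
    """Allow only read-only queries that touch tables on the allowlist.

    A toy illustration of the principle; regex matching is not a
    substitute for a real SQL parser or database-level permissions.
    """
    stmt = sql.strip().rstrip(";")
    if not stmt.lower().startswith("select"):
        return False  # agents get read-only access in this sketch
    # Collect every table referenced via FROM or JOIN.
    pairs = re.findall(r"\bfrom\s+(\w+)|\bjoin\s+(\w+)", stmt, re.IGNORECASE)
    tables = {t for pair in pairs for t in pair if t}
    return tables.issubset(allowed_tables)

ALLOWED = {"customers", "orders"}  # hypothetical table names
assert validate_agent_query("SELECT name FROM customers", ALLOWED)
assert not validate_agent_query("DELETE FROM customers", ALLOWED)
assert not validate_agent_query(
    "SELECT * FROM customers JOIN salaries ON 1=1", ALLOWED)
```

The design choice is the important part: the constraint lives between the agent and the database, so it holds even when the agent composes queries no application developer anticipated.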
“Like many companies, Ernst & Young and Lumen have been working to bring AI tools and services into their respective operations. But they’ve taken very different approaches to find success. With many AI projects failing, there’s no one-size-fits-all formula for advancing AI proofs of concept to real-world use in the corporate world. … EY, being in a regulated space of finance and tax, has embraced what it sees as a measured and responsible approach while managing the risks that come with rolling out new technology. Lumen has been more aggressive, working to create an AI culture at the company by giving all employees AI tools from day one. ‘There’s become a bifurcation [in approaches] …, some experimentation is innovation theater…, but you’re now starting to get to tangible use cases,’ said Joe Depa, global chief innovation officer at EY.” From the article “How Two Companies Are Moving AI Prototypes To Production” posted January 28, 2026, on Computer World by Agam Shah – U.S. journalist, academic, adjunct professor at the Walter Cronkite School of Journalism and Mass Communication at Arizona State University, UN consultant.
“Doug Gilbert, CIO and Chief Digital Officer at business and digital transformation service and solutions provider Sutherland Global, says, ‘You must make sure that [the data] feeding it is always clean … I do MDM because we go through so many different audits. It was painful, but I have less breakage, and my systems require less maintenance. I get proper AI outputs and proper predictions when I’m doing analytics. More importantly, my auditability is very easy to prove out because we have the proper controls in place.’ Louis Landry, CTO at Teradata, a cloud and analytics data platform provider for AI, says, ‘It definitely feels that we don’t necessarily want to talk about [MDM], but it’s very important and very necessary for the future we’re all planning to live in. What I’ve seen over the last several years is when you’re talking about data quality and data governance, folks might be willing to spend money on a technology tool, but they’re not willing to spend money on the process and people that are associated with it, and a lot of this is a people problem.’ In older organizations, MDM maturity tends to be unevenly distributed. The core data tends to be fairly well organized and managed, but the rest isn’t. The age-old problem of data ownership and a reticence to share data doesn’t help. ‘The notion of data mesh [is] I’ll manage this piece, and you manage that piece. We’ll be disconnected but we can connect, and you can use it, but don’t mess with it. It’s mine,’ says Landry. ‘We’ve known for decades that value acceleration comes when you integrate all this stuff so you can see inventory with customer data, sales data with revenue data -- the stuff where magic starts to happen when you bring all these things together. The most advanced organizations have subject matter experts for specific domains.
It really improves the overall quality and accessibility of that information and allows data to be turned into knowledge.’ In the tech world, whether it’s networks or MDM, there are opposing trends that tend to arise, not the least of which is centralization and decentralization. ‘There’s always this back and forth between governance, control and accuracy versus autonomy and agility. I think we’ve been hard tilted towards autonomy and agility,’ says Landry. ‘With things like generative AI and agents, it looks like we might get a chance at serving both principal needs because you can kind of separate the data management side of it and provide the right kind of governance and control that’s decoupled from all of the autonomy and agility that’s necessary at the consumption and analysis layers.’ He sees the data problem becoming more acute given that every app seems to have its own database and unique version of the ‘truth.’ ‘We’re going to see an unimaginable complexity crisis, and I think that fragmentation is something that we’re all going to have to deal with, and the practice of master data management is going to be incredibly important in dealing with that,’ says Landry.” From the article “Why Master Data Management Is Even More Important Now” posted August 19, 2025, on Information Week by Lisa Morgan – U.S. journalist.
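The integration payoff Landry describes (seeing inventory, customer, sales, and revenue data together) depends on first reconciling the duplicate records each app keeps. The core MDM move is building one "golden record" per entity. A minimal sketch, where the match key (normalized email) and the survivorship rule (latest non-empty value wins) are illustrative choices, not a specific vendor's method:

```python
from collections import defaultdict

def build_golden_records(records: list) -> list:
    """Collapse duplicate records from different systems into one
    'golden' record per entity. The match key and survivorship rule
    (most recently updated non-empty value wins) are example choices."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["email"].strip().lower()].append(rec)  # normalized key
    golden = []
    for key, recs in groups.items():
        recs.sort(key=lambda r: r["updated_at"])  # oldest first
        merged = {}
        for rec in recs:  # later non-empty values overwrite earlier ones
            merged.update({k: v for k, v in rec.items() if v not in (None, "")})
        merged["email"] = key
        golden.append(merged)
    return golden

# Two hypothetical systems holding the same customer under different keys.
crm   = {"email": "Ana@Example.com", "phone": "", "updated_at": "2025-01-01"}
store = {"email": "ana@example.com", "phone": "555-0100", "updated_at": "2025-06-01"}
print(build_golden_records([crm, store]))
```

Even this toy version shows why Landry calls it a people problem as much as a tool problem: someone has to own the match key and the survivorship rule, and agree to live with them across systems.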
“Isha Khatana, a machine learning engineer and data analyst, says, ‘Real data is messy. Real impact comes from making sense of it anyway.’ So how do CIOs make sense of incomplete or garbled data? First, by explaining to AI stakeholders and management that the data AI uses is by no means “normal” in terms of the data quality standards that IT traditionally sets — and that the necessity of using less than perfect data for AI exists because AI must be fully informed with whatever data is “out there” and relevant if it is to have a full grasp of its subject domain. This explanation about how AI uses non-standard data is important because working with non-standard data is going to require a different set of data management practices and skills from data analysts who prepare data for AI. Consequently, the CEO and other business stakeholders will see new data preparation tasks pop up in AI projects, and these new tasks will consume time, resources, and dollars. Because most of these stakeholders see data preparation as non-value-added grunt work, they won’t like what they see. It will be up to the CIO to explain to stakeholders why AI requires working with different types of data that must be prepared differently. One way to impress the necessity of this data preparation “grunt work” is to point to the risks to the company if an AI system delivers faulty results because of an imperfect algorithm or data that wasn’t properly prepared. Define data preparation schemes tailored to each AI project. Each AI project is unique when it comes to data preparation — but there are some overall guidelines that can be applied. First comes the acknowledgment that, because of AI’s variegated data sources, some data incoming to AI will be less than perfect. An automated machine learning function that relies directly on the data it ingests, without necessarily screening that data for accuracy, is one example. Another example is an AI system that relies on sensor-generated data.
In some cases, that data will contain jitter — and it will need to be removed. In other cases, such as the modeling of a molecule for developing a vaccine, the incoming body of data from worldwide research might be so large that the pipeline for collecting that data must be purposefully narrowed only to research that specifically mentions the molecule being studied by name. This is AI governance work, and it requires a different set of data analysis skills that go beyond traditional extracting, loading, and transforming data — and into the assessment of different types of data within the AI context in which the data is being used.” From the opinion piece “The AI Data Dilemma Every CIO Must Address” posted March 24, 2026, on CIO by Mary Shacklett – U.S. freelance writer and president of Transworld Data, a technology analytics, market research, and consultant firm.
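Shacklett's two examples, stripping jitter from sensor streams and narrowing a research pipeline to documents that mention the target molecule by name, both reduce to short filters. The sketch below is one possible shape for each; the window size, deviation threshold, and substring matching are illustrative assumptions that a real project would tune and harden:

```python
import statistics

def remove_jitter(readings: list, window: int = 5, max_dev: float = 3.0) -> list:
    """Drop readings that deviate sharply from the local median.
    Window size and deviation threshold are per-project tuning choices."""
    cleaned = []
    for i, x in enumerate(readings):
        lo = max(0, i - window // 2)
        hi = min(len(readings), i + window // 2 + 1)
        if abs(x - statistics.median(readings[lo:hi])) <= max_dev:
            cleaned.append(x)
    return cleaned

def narrow_corpus(documents: list, term: str) -> list:
    """Keep only documents that mention the target term by name."""
    needle = term.lower()
    return [d for d in documents if needle in d.lower()]

# A spike of 50.0 in an otherwise stable sensor stream is filtered out.
assert remove_jitter([10.0, 10.2, 50.0, 10.1, 9.9]) == [10.0, 10.2, 10.1, 9.9]
# Only papers naming the (hypothetical) molecule survive the narrowing.
assert narrow_corpus(["binding study of moleculeX", "unrelated survey"],
                     "moleculeX") == ["binding study of moleculeX"]
```

This is the point of her "different set of data analysis skills": the filters are trivial to write, but deciding what counts as jitter and how narrow the pipeline should be is judgment work tied to the specific AI use case.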
“Advanced platforms assume strong inputs. When fundamentals are missing, technology accelerates confusion instead of insight. Fundamentals are not set once and forgotten. Continuous process improvement keeps them relevant. By reviewing how data is created, validated, and consumed, leaders prevent drift and reinforce discipline. Incremental improvements matter. A clarified definition. A refined validation rule. A documented ownership change. These small actions compound into stable, trusted data over time. … Fundamentals fail when they are delegated without accountability. Leaders set the tone by reinforcing that data quality matters and that shortcuts carry risk. When leadership stays engaged, fundamentals become part of how the organization operates. Choose one fundamental in your data environment that feels assumed rather than explicit. Clarify it, document it, and measure it. That single step can unlock more value than any new tool.” From the article “Why Fundamentals Still Win In Data Management” posted February 27, 2026, on LinkedIn by Douglas Day – U.S. data management, governance and process improvement consultant.
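Day's closing advice, to pick one assumed fundamental and then clarify, document, and measure it, can be as small as a completeness metric on a single field. A minimal sketch (the records and the 'owner' field are hypothetical examples, not from his article):

```python
def field_completeness(records: list, field: str) -> float:
    """Fraction of records where `field` is present and non-empty --
    one explicit, measurable data-quality fundamental."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

# Hypothetical check: is the 'owner' field actually populated?
rows = [{"id": 1, "owner": "finance"}, {"id": 2, "owner": ""}, {"id": 3}]
print(f"owner completeness: {field_completeness(rows, 'owner'):.0%}")
```

Tracking a number like this over time is what turns an assumed fundamental into an explicit one: drift shows up as a falling metric rather than as a surprise downstream.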
“CIOs should start with a basic but overlooked step: scrutinize vendor subprocessor lists. Cloud providers are well understood. LLM providers are not. AI has created a second, poorly mapped subprocessor layer — and that’s where governance breaks down. Technology controls alone do not close the responsible-AI gap. Behavior matters more. Asha Palmer, SVP of Compliance Solutions at Skillsoft and a former US federal prosecutor, is often called in after AI incidents. She says the first uncomfortable truth leaders confront is that the outcome was predictable. ‘We knew this could happen,’ she said. ‘The real question is: why didn’t we equip people to deal with it before it did?’ Pressure to perform is the root cause. Employees use AI to move faster and meet targets — just as they have in every compliance failure from bribery to data misuse. Blanket bans on genAI do not work. ‘If you take away responsible use,’ Palmer said, ‘people will use it irresponsibly — in secret, in ways you can’t govern.’ What to do: Shift from awareness training to behavioral learning. Palmer calls it ‘moral muscle memory,’ a scenario-based practice that teaches people to stop, assess risk, and choose a better action under pressure. Regulators and auditors look for evidence that the right people have received the right training for the risks they actually face. One-size-fits-all AI literacy is a red flag. The final gap appears when organizations are asked to prove their governance works. Danny Manimbo is ISO & AI Practice Leader at Schellman, an attestation and compliance services provider. He sees the same failure pattern repeatedly. ‘Organizations confuse having policies with having governance,’ he said. ‘Responsible AI principles don’t matter if they don’t influence real decisions.’ Auditors might start with a simple request: show us a documented AI risk-based decision that changed an outcome. 
Mature governance leaves fingerprints — including delayed deployments, rejected vendors, and constrained features. Immature governance produces vague assurances. ‘The most expensive governance work is the work you try to do after deployment,’ Manimbo warned. Walking back data lineage, accountability, and intended purpose is extraordinarily difficult once systems are live. What to do: Treat AI governance as a management system, not a compliance exercise. Standards like ISO/IEC 42001 work only when they connect risk management, change control, monitoring, and internal audit into a continuous loop. You can tell governance is working when it changes business decisions, not when it produces documentation. Across all five interviews, one theme recurs: the responsible AI gap is not primarily a technology failure. It’s a governance timing failure. Controls are being designed for yesterday’s systems while AI is already shaping today’s decisions. Several of the sources stressed that CIOs should stop framing responsible AI as a future-state program and start treating it as an operational hygiene issue — closer to identity management or financial controls than to ethics committees.” From the article “Why AI Adoption Keeps Outrunning Governance – And What To Do About It” posted February 2, 2026, on Computer World by Pat Brans – French academic, previously held senior positions with Computer Sciences Corporation, HP and Sybase.
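Manimbo's audit test ("show us a documented AI risk-based decision that changed an outcome") implies keeping decisions in a structured, queryable log rather than in meeting notes. One hypothetical minimal record shape is sketched below; the fields are assumptions for illustration, not requirements drawn from ISO/IEC 42001 or from his remarks.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative structure for an auditable AI governance decision;
# the fields are assumptions, not taken from any standard.
@dataclass
class AIRiskDecision:
    system: str            # which AI system or vendor was assessed
    risk_identified: str
    decision: str          # e.g. "delayed deployment", "rejected vendor"
    outcome_changed: bool  # did governance alter the original plan?
    decided_on: date
    approvers: list = field(default_factory=list)

# A hypothetical entry of the "fingerprint" kind auditors look for.
entry = AIRiskDecision(
    system="third-party resume screener",
    risk_identified="unvalidated bias testing",
    decision="deployment delayed pending fairness audit",
    outcome_changed=True,
    decided_on=date(2026, 1, 15),
    approvers=["CIO", "chief privacy officer"],
)
```

The `outcome_changed` flag captures Manimbo's distinction directly: a log full of entries where it is always false is documentation, not governance.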
“The goal is to turn data into information, and information into insight.” Carly Fiorina – U.S. business executive, former CEO of Hewlett-Packard.
“In a world of more data, the companies with more data-literate people are the ones that are going to win.” Miro Kazakoff – U.S. academic, senior lecturer in Managerial Communication at the MIT Sloan School of Management.
“Of course hard numbers tell an important story; user stats and sales numbers will always be key metrics. But every day, your users are sharing a huge amount of qualitative data, too — and a lot of companies either don’t know how or forget to act on it.” Stewart Butterfield – Canadian business executive, former CEO and founder of Slack.
“Processed data is information. Processed information is knowledge. Processed knowledge is Wisdom.” Ankala V. Subbarao – Indian cardiologist.
“Without data, you’re just another person with an opinion.” W. Edwards Deming – U.S. business theorist, economist, industrial engineer, management consultant.
“Data is a precious thing and will last longer than the systems themselves.” Tim Berners-Lee – U.K. computer scientist, best known as the inventor of the World Wide Web, HTML, the URL system, and HTTP, currently a professorial research fellow at the University of Oxford and a professor emeritus at the Massachusetts Institute of Technology.

