Governing AI, Unlocking Value: Secure Unstructured Data in an EU–U.S. World

Expert Webinar with HYLAND on March 24, 2026 | Speakers: Jonathan Whitear (AI Specialist EMEA and APAC, HYLAND) and Carine Kriegenhofer (Director, International Marketing, HYLAND)
Photo courtesy of HYLAND

Artificial intelligence is rapidly reshaping how organizations create value from data, yet its greatest untapped potential lies in unstructured information—documents, emails, contracts, records, images, and other content that traditionally has been stored rather than analyzed. An estimated 80% of enterprise data is unstructured, held across silos and rarely examined beyond its original purpose. While this has not historically been seen as a critical problem, it represents a major unrealized opportunity at a time when AI capabilities have matured dramatically.

The urgency is clear: organizations are under pressure to generate measurable ROI from AI investments, while simultaneously navigating growing regulatory and governance expectations in both the EU and the United States. Rather than experimenting with AI in the abstract, early success stories consistently start with a clearly defined business outcome, an identified need, and a known source of unstructured data. In contrast, “technology-first” initiatives—testing AI and then searching for a use case—frequently stall due to unclear value or insufficient data readiness.

The most impactful early use cases tend to focus on mundane, repetitive, or previously impractical tasks, such as document analysis, information extraction, classification, summarization, and compliance-related workflows. These use cases may appear unglamorous, but they deliver tangible benefits, freeing knowledge workers to focus on higher-value activities. A recurring challenge, however, is ensuring that outcomes are meaningful to end users. Definable business value—measured against the full cost of discovery, development, and operation—remains one of the most overlooked success factors in AI initiatives.

Looking ahead, the AI landscape is evolving along several parallel paths. Generative AI is increasingly accessible to all employees, enhancing everyday tasks by adding broader context and insight. AI agents enable repeatable, autonomous execution of well-defined tasks, while agentic AI points toward the automation of entire processes, operating with greater delegated authority under observation and guardrails—shifting from rigid procedures to systems that resemble adaptive “minds” rather than static SOPs. At the same time, data readiness—preparing unstructured data so that data scientists can work with it—remains a critical enabler. As the technology still sits at the earliest stage of Geoffrey Moore’s technology adoption curve, much of its future potential remains unwritten, with early momentum driven primarily by the need for demonstrable ROI.

Across regions and industries, the most common question organizations ask is: “Where do we start?” Discovery workshops reveal that discussions often focus more on change management and employee adoption than on defining concrete use cases. Many valuable unstructured data assets are owned by teams that rarely engage with technology strategy, meaning the possibilities of AI have not yet been fully considered. When a clear use case is finally defined, the results are often transformative—not because of the technology itself, but because of the outcome achieved, typically through a combination of AI and existing systems.

However, unlocking this value requires careful risk management. Beyond obvious concerns such as underuse or overreliance on AI outputs, less visible risks include document governance, access control, outdated or misclassified information, user error leading to data exposure, and the growing challenge of “AI technical debt” as systems struggle to keep pace with rapid change. Governance frameworks, or guardrails, are therefore essential. These include technical controls, role-based access, usage policies, training, and documentation requirements, ensuring AI systems operate within defined parameters while remaining auditable and compliant.

Responsibility is shared but ultimately rests with organizations. In the EU, the AI Act places clear obligations on employers to ensure correct use, proper training, and traceability of AI-assisted decisions. Similar principles are emerging in the U.S., building on existing data protection, privacy, and intellectual property frameworks. While data sovereignty and local infrastructure can play a role, AI governance should be treated with the same rigor as any core enterprise system.

In conclusion, the greatest gains from AI today are not found in extreme or speculative use cases, but in systematically applying AI to everyday unstructured data. Organizations that allow business units to identify real problems first, apply technology second, and govern usage responsibly will be best positioned to turn compliance into competitive advantage in an increasingly complex EU–U.S. regulatory environment.

For more detailed information, please contact:

Heather Liermann

Head of Department

Membership Engagement & Development