Schedule

New times: From Data for AI to AI for Data

We are used to managing data before deploying AI: carefully collecting, cleaning and structuring it. But that is changing. AI now helps to improve the data itself: automatically enriching, validating, integrating and documenting it. We are moving from static management to dynamic improvement: AI brings data to life and changes how we deal with it.

Topics and discussion points:

  • Data becomes a living, self-learning system through AI.
  • Errors and missing data can be automatically detected and corrected.
  • Data sources will soon merge automatically.
  • Classification, documentation and compliance are increasingly supported in real time.
  • The promise of less manual work seems within reach, but how far are we really from achieving it?
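
As a purely illustrative sketch (hypothetical rules and data, not any specific product), the detect-and-correct idea in the bullets above can be reduced to a few lines:

```python
# Illustrative sketch only: "AI for data" reduced to its simplest form,
# rule-based detection and correction of inconsistent and missing values.

records = [
    {"country": "NL", "revenue": 120.0},
    {"country": "nl", "revenue": None},   # inconsistent code, missing value
    {"country": "DE", "revenue": 80.0},
]

def improve(rows):
    """Detect issues, apply corrections, and return rows plus an audit log."""
    known = [r["revenue"] for r in rows if r["revenue"] is not None]
    fallback = sum(known) / len(known)             # naive mean imputation
    fixed, log = [], []
    for i, r in enumerate(rows):
        r = dict(r, country=r["country"].upper())  # normalize country codes
        if r["revenue"] is None:
            r["revenue"] = fallback
            log.append(f"row {i}: imputed revenue={fallback:.1f}")
        fixed.append(r)
    return fixed, log
```

Real AI-driven enrichment replaces these hand-written rules with learned ones; what stays the same is the audit log, so that every automatic correction remains explainable.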

Grounded AI in Data Warehousing: How to Make Your LLM Stop Lying

Hallucinations from AI can destroy trust in BI outputs. This live, technical session walks through building an LLM-powered analytics assistant that only answers from governed, verified data. Using Snowflake Cortex Semantic Models, Cortex Analyst, and Cortex Search, we’ll map business terms to actual definitions, auto-generate safe SQL, and trace every step for auditability. You’ll see the full stack in action, with architecture diagrams and code patterns you can implement.

Key takeaways:

  • How to ground LLMs in your semantic layer
  • How to integrate text-to-SQL safely in BI workflows
  • How to make AI outputs traceable and defensible
  • How to connect unstructured docs and structured data in one system
  • How to design observability into AI so you know when it fails.
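
The grounding pattern above can be sketched in miniature. Everything below is a hypothetical stand-in (the session itself uses Snowflake Cortex Semantic Models and Cortex Analyst, which are not shown): an assistant that only emits SQL drawn from a governed semantic layer, and refuses anything it cannot map.

```python
# Minimal sketch of a grounding gate: the assistant may only answer with SQL
# taken from a governed semantic layer, never free-form. Hypothetical names.

SEMANTIC_LAYER = {
    # business term -> vetted, auditable SQL definition
    "net revenue": "SELECT SUM(amount) FROM finance.orders WHERE status = 'settled'",
    "active customers": (
        "SELECT COUNT(DISTINCT customer_id) FROM crm.sessions "
        "WHERE login_at >= CURRENT_DATE - 30"
    ),
}

def answer(question: str) -> dict:
    """Return governed SQL plus a trace, or an explicit refusal."""
    q = question.lower()
    for term, sql in SEMANTIC_LAYER.items():
        if term in q:
            return {"sql": sql, "trace": f"matched governed term: {term!r}"}
    # No governed definition matched: refuse instead of hallucinating.
    return {"sql": None, "trace": "no governed term matched; refusing to guess"}
```

Real systems let an LLM match the question against the semantic model rather than doing substring matching; the load-bearing parts are the refusal path and the trace, which make every answer auditable.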

A Holistic Data Architecture: From Source to Insight

Organizations face complex data challenges that common architectures—data warehouses, lakes, lakehouses, and fabrics—only partly solve. A holistic architecture covering the full data journey, from source to insight, is needed, with these architectures serving as components of a larger whole.

Many organizations struggle with complex data challenges. Examples include tracking data usage (both transactional and analytical), properly managing and maintaining historical data, synchronizing source systems, reconstructing events (operational lineage), making data and reports accessible via metadata, streamlining data exchange, and preparing data for AI applications.
Often, the solution is sought in reference architectures based on, for example, a data warehouse, data lake, data lakehouse, or data fabric. While valuable, these architectures do not fully address the challenges mentioned above. They focus only on part of the data journey and fail to solve the core problems.
To truly tackle these challenges, a data architecture must cover the entire data journey: from source to insight. Only a holistic approach can achieve this. During this session, we will discuss a data architecture that spans the full data journey. The previously mentioned architectures may play a role within that architecture, but only as components of a larger whole.

This session will cover, among other topics:

  • Three types of IT systems: source systems, compensation systems and analytical systems
  • The positioning of data warehouses, data lakes, and data lakehouses as compensation systems
  • An overview of the Delta data architecture
  • How source systems can be made future-proof by “wrapping” them with additional modules
  • The importance of abstraction and data minimization within a data architecture
  • The role of metadata as the driving force behind a modern data shop.

Beyond Hive: Navigating the Open Table Format Revolution in Modern Data Lakes

The data lake world is shifting from Hive to formats like Iceberg, Hudi, Delta Lake and DuckLake. This session offers practical guidance on schema evolution, time travel, ACID and metadata, highlighting pros, pitfalls and costs so you can choose the right format with confidence.

The data lake landscape is undergoing a fundamental transformation. Traditional Hive tables are giving way to a new generation of open table formats—Apache Iceberg, Apache Hudi, Delta Lake, and emerging contenders like DuckLake—each promising to solve the inherent challenges of managing massive datasets at scale.
But which format fits your architecture? This session cuts through the marketing noise to deliver practical insights for data architects and engineers navigating this critical decision. We’ll explore how these formats tackle schema evolution, time travel, ACID transactions, and metadata management differently, and what these differences mean for your data platform’s performance, reliability, and total cost of ownership.
Drawing from real-world implementations, you’ll discover the hidden complexities, unexpected benefits, and common pitfalls of each approach. Whether you’re modernizing legacy Hive infrastructure, building greenfield data lakes, or evaluating lakehouse architectures, you’ll leave with a clear framework for choosing and implementing the right open table format for your specific use case—and the confidence to justify that decision to stakeholders.
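
The mechanics these formats broadly share (an append-only log of immutable snapshots behind a single current-version pointer, which is what enables atomic commits and time travel) can be illustrated with a toy sketch; none of this is real Iceberg, Hudi or Delta Lake API:

```python
# Toy, format-agnostic sketch of the snapshot mechanism behind open table
# formats: every commit writes a new immutable snapshot, and the "current"
# pointer is swapped atomically, so readers always see a consistent version
# and older snapshots stay queryable (time travel).

class ToyTable:
    def __init__(self):
        self.snapshots = [()]   # snapshot 0: empty table, stored as tuples
        self.current = 0        # single pointer = the atomic commit boundary

    def commit(self, new_rows):
        base = self.snapshots[self.current]
        self.snapshots.append(base + tuple(new_rows))  # append-only history
        self.current = len(self.snapshots) - 1         # atomic pointer swap

    def scan(self, as_of=None):
        version = self.current if as_of is None else as_of
        return list(self.snapshots[version])
```

After two commits, `scan()` reads the latest version while `scan(as_of=1)` reads the table as it stood after the first commit. The real formats add manifest files, column statistics and conflict detection on top, but the pointer-swap idea is the same.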

Highlights:

  • Format Face-Off: Direct comparison of Hive, Iceberg, Hudi, Delta Lake, and DuckLake capabilities across critical dimensions including ACID guarantees, partition evolution, and query performance optimization
  • Real-World Battle Scars: Lessons learned from production deployments including migration strategies, performance tuning insights, and cost implications at petabyte scale
  • Ecosystem Integration Deep-Dive: How each format plays with modern compute engines (e.g. Spark, Flink, Trino, Presto, DuckDB) and cloud platforms, plus vendor lock-in considerations
  • The Hidden Costs: Beyond storage and compute—examining operational overhead, team expertise requirements, and long-term maintenance implications of your format choice
  • Decision Framework: A practical methodology for evaluating which open table format aligns with your organization’s data architecture, workload patterns, and strategic goals.

Modeling the Mesh: understanding data products and domains [English spoken]

Data Mesh is something every organization talks about but few are actually doing. This framework for federated data management and governance promises a lot and demands even more, but some elements of the paradigm can benefit every organization: especially data products and domains. Thinking in terms of products and domains also demands new approaches to data modeling - join this session to find out how data modeling works in the Data Mesh!

Data Mesh, coined by Zhamak Dehghani, is a framework for federated data management and governance that gets a lot of attention from large organizations around the world facing problems with bottlenecked data teams and sprawling solution spaces. While the core principles of Data Mesh are well established in the literature, and practical implementation stories have started to emerge giving meat to the theoretical bones, some questions remain.
One of the biggest challenges is managing business context across multiple domains and data products. In this session, we will discuss how data modeling can be used to enable both within-domain design of understandable and discoverable data products as well as cross-domain understanding of domain boundaries, overlaps, and possibly conflicting business concepts. The well-known best practices of conceptual and logical data models prove their worth in this modern de-centralized framework by enabling semantic interoperability across different data products and domains, as well as allowing the organization to maintain a big picture of their data contents.

Topics and discussion points:

  • Data Mesh basics – the paradigm and its principles
  • Pros and cons of the Mesh approach
  • Why context matters – semantic interoperability
  • Data modeling as the key to context and semantics
  • Modeling at two levels – data products and domains
  • Maintaining the big picture and why the “Enterprise Data Model” is still a valid topic.

Data Governance Sprint: Kick-Start Governance in Just Weeks

A five-week Data Governance Sprint uses a structured, workshop-driven method to cut debate, align stakeholders, and deliver practical results fast. It establishes clear roles, a business glossary, an operating model, and early wins that build momentum for data governance.

Are your data governance efforts stuck in endless debate cycles, or looking good only on paper, with little to show for it? The Data Governance Sprint™ is a proven, accelerated method to establish practical data governance foundations in just five weeks. This session introduces a structured, workshop-based approach that moves beyond theory and delivers tangible outcomes: clear roles, a business glossary, an operating model, and early wins that build momentum. Designed for data leaders and practitioners, this methodology helps you overcome alignment struggles, engage stakeholders, and demonstrate measurable progress—fast.

  • Understand why governance initiatives often fail—from lack of momentum to disengaged stakeholders—and how a sprint approach addresses these barriers
  • Explore the 5-week Data Governance Sprint™ structure—a time-boxed, repeatable method to design, build, and validate core governance components
  • Learn practical facilitation techniques to turn unproductive meetings into high-impact workshops that align business and IT around shared goals
  • Prototype and test minimum viable governance deliverables such as a business glossary, lightweight operating model, and data stewardship roles
  • See real-world applications and lessons learned from organizations that applied the sprint to accelerate adoption, avoid pitfalls, and sustain change
  • Leave with an actionable roadmap for launching or rebooting a governance initiative in your own organization, showing quick wins while laying a foundation for long-term success.

The Data Administration: essential foundation for effective data management

Organisations increasingly rely on data but often lack a clear understanding of what they actually manage: insight is missing, datasets are poorly mapped, and metadata is scattered. A solid data administration provides structure and clarity. In this session, you will discover why this foundation is indispensable — and how to build it in practice.

Topics covered in this session:

  • Why a Data Administration is just as essential as financial bookkeeping
  • How to design a clear, organisation-wide metamodel
  • Practical setup of processes, roles, and governance
  • How tools (catalogs, metadata platforms) actually work — and when they don’t
  • Starting step by step: from use-case-driven approach to sustainable implementation.

Concept Modelling with Normal People – Five Key Lessons From 45 Years of Modelling [English spoken]

Alec Sharp shares simple and timeless techniques to improve any sort of business modelling, not just concept or data modeling, and increase engagement with your business partners.

Our speaker built his first concept model in 1979. It wasn’t very good. In fact, it looked like a hierarchical IMS physical database design. Eventually, over many modelling assignments around the globe, in every kind of organisation and culture, a small number of core principles emerged for effective modelling. All revolve around the idea that we’re modelling for people, not machines. It turns out that even in the age of AI, virtual work, misinformation, and constantly changing technology, these lessons are proving to be just as important as – or even more important than – ever. After all, we’re only human.

1. Data Modelling doesn’t matter (at first) – just start with a nice conversation.
2. Getting to the essence – “What” versus “Who, How, and other distractions.”
3. Good things come to those who wait – why patience is a virtue.
4. Be fearless, and play to your strengths – vulnerability and ignorance.
5. Every picture tells a story, except those that don’t – hire a graphic designer.
6. Bonus – your concept model is good for so much more than “data.”


Data Mesh Information Architecture: modeling data products and domains [English spoken]

This workshop addresses information architecture in decentralized data environments. It examines how domains document and share data, explores conceptual and logical modeling for clarity and interoperability, and provides practical exercises to design data products aligned with domain semantics.

Data Mesh is a federated approach to data management and governance developed by Zhamak Dehghani. Its structure is based on domains and data products, elements that have also seen wide attention from organizations that are not otherwise working towards a full Mesh implementation. Working with autonomous domains that share data with the rest of the organization via data products is an excellent way to bring data work closer to the business and to allow domain-specific prioritization instead of a massive, centralized bottleneck team. However, with each domain having its own understanding of the business and its core concepts, semantic interoperability becomes a challenge.

This workshop focuses on the problems of information architecture in a decentralized landscape. How can we document what data we have available, how do we understand what other teams’ data means, and how do we maintain a big picture of what is where? We will explore conceptual modeling as a key method of documenting the business context and semantics of domains and data products, more detailed logical modeling as a means to document data product structures, and both within-domain and cross-domain linking of the various models and the objects in them.

As a hands-on exercise, we will model a domain and design some example data products that maintain strong links with their domain-level semantics. The workshop will give you the basic skills to do data modeling at these higher levels of abstraction, and an understanding of the key characteristics and challenges of the Data Mesh that affect the way we need to do data modeling.
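
The within-domain and cross-domain linking described above can be made concrete with a minimal sketch (all names hypothetical): domain concepts own the definitions, and each data product field points back to the concept it implements, so semantics remain resolvable across the domain boundary.

```python
# Hypothetical sketch: a data product whose fields keep explicit links to
# domain-level concepts, so consumers in other domains can resolve meaning.

DOMAIN_GLOSSARY = {
    # concept -> business definition owned by the domain team
    "Order": "A confirmed customer purchase, priced and accepted.",
    "Customer": "A party with at least one confirmed purchase.",
}

DATA_PRODUCT = {
    "name": "orders_daily",
    "fields": {
        # product field -> the domain concept it implements
        "order_id": "Order",
        "customer_id": "Customer",
    },
}

def describe(field: str) -> str:
    """Resolve a data product field to its domain-level definition."""
    concept = DATA_PRODUCT["fields"][field]
    return f"{field} -> {concept}: {DOMAIN_GLOSSARY[concept]}"
```

In practice this linkage lives in catalogs, data contracts and modeling tools rather than in code, but the principle is the one exercised in the workshop: the product documents its structure, the domain documents the meaning.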

Learning objectives

  • Understand the basics of the Data Mesh paradigm and its challenges relating to information architecture and semantics
  • Learn the basics of conceptual modeling as a method of defining the business context of domains and data products
  • Learn the basics of logical modeling as a part of data product design process
  • Learn how solution-level metadata (e.g. data contracts) can expose domain-level context across domain boundaries
  • Understand the basic operating model of information architecture management in the context of independent domain teams within a Data Mesh setup


Who is it for

  • Data Architects
  • Chief Data Officers and Heads of Data interested in federated operating models
  • Data Product Owners and Team Leads working in a federated model
  • Data Governance experts


Detailed Course Outline

1. Introduction

  • Welcome and introductions
  • Course agenda and goals


2. Data Mesh basics

  • General idea
  • Four pillars of Data Mesh according to Dehghani
  • Domains and domain teams
  • Data products
  • The interoperability challenge


3. How conceptual models help with cross-domain understanding

  • Basics of conceptual modeling: entities, relationships, and attributes
  • How to identify the real business objects
  • Building definitions and glossaries


4. Hands-on exercise: modeling a domain

  • Domain boundaries
  • Identifying entities within the domain
  • Definitions and “domain ontology”


5. Data modeling as part of data product design

  • Understanding product scope as part of the domain model
  • Logical model as product-level design & documentation
  • Deriving logical models from conceptual model
  • Maintaining links with the domain model
  • What happens when the product expands beyond the domain?


6. Ensuring semantic interoperability at the domain boundary

  • Exposing metadata from domains and data products
  • Data contract basics
  • Domain glossaries vs. shared enterprise glossaries
  • Dealing with polysemes


7. Data Mesh information architecture operating model

  • Domain team responsibilities
  • Data product owner responsibilities
  • Platform team responsibilities
  • Federated governance


8. Conclusion

  • Key takeaways
  • Where to start in your organization
  • How to learn more

Grounded AI in Data Warehousing: How to Make Your LLM Stop Lying [English spoken]

Hallucinations from AI can destroy trust in BI outputs. This live, half day technical hands-on workshop walks through building an LLM-powered analytics assistant that only answers from governed, verified data. Contains exercises that you will run from your own laptop.

The full workshop description will follow shortly.

Practical hands-on workshop with exercises that you will run on your own laptop.


Concept Modelling for Business Analysts [English spoken]

Concept Modelling (or Conceptual Data Modelling) has seen an amazing resurgence of popularity in recent years, and Alec Sharp illustrates the many reasons for this along with practical techniques and guidelines to ensure useful models and business engagement.

Whether you call it a conceptual data model, a domain model, a business object model, or even a “thing model,” the concept model is seeing a worldwide resurgence of interest. Why? Because a concept model is a fundamental technique for improving communication among stakeholders in any sort of initiative. Sadly, that communication often gets lost – in the clouds, in the weeds, or in chasing the latest bright and shiny object. Having experienced this, Business Analysts everywhere are realizing Concept Modelling is a powerful addition to their BA toolkit. This session will even show how a concept model can be used to easily identify use cases, user stories, services, and other functional requirements. 

Realizing the value of concept modelling is also, surprisingly, taking hold in the data community. “Surprisingly” because many data practitioners had seen concept modelling as an “old school” technique. Not anymore! In the past few years, data professionals who have seen their big data, data science/AI, data lake, data mesh, data fabric, data lakehouse, etc. efforts fail to deliver expected benefits realize that it is because those efforts are not based on a shared view of the enterprise and the things it cares about. That’s where concept modelling helps. Data management/governance teams are (or should be!) taking advantage of the current support for Concept Modelling. After all, we can’t manage what hasn’t been modelled!

The Agile community is especially seeing the need for concept modelling. Because Agile is now the default approach, even on enterprise-scale initiatives, Agile teams need more than some user stories on Post-its in their backlog. Concept modelling is being embraced as an essential foundation on which to envision and develop solutions. In all these cases, the key is to see a concept model as a description of a business, not a technical description of a database schema. 

This workshop introduces concept modelling from a non-technical perspective, provides tips and guidelines for the analyst, and explores entity-relationship modelling at conceptual and logical levels using techniques that maximise client engagement and understanding. We’ll also look at techniques for facilitating concept modelling sessions (virtually and in-person), applying concept modelling within other disciplines (e.g., process change or business analysis), and moving into more complex modelling situations.

Drawing on over forty years of successful consulting and modelling, on projects of every size and type, this session provides proven techniques backed up with current, real-life examples.

Topics include:

  • The essence of concept modelling and essential guidelines for avoiding common pitfalls
  • Methods for engaging our business clients in conceptual modelling without them realizing it
  • Applying an easy, language-oriented approach to initiating development of a concept model
  • Why bottom-up techniques often work best
  • “Use your words!” – how definitions and assertions improve concept models
  • How to quickly develop useful entity definitions while avoiding conflict
  • Why a data model needs a sense of direction
  • The four most common patterns in data modelling, and the four most common errors in specifying entities
  • Making the transition from conceptual to logical using the world’s simplest guide to normalisation
  • Understanding “the four Ds of data modelling” – definition, dependency, demonstration, and detail
  • Tips for conducting a concept model/data model review presentation
  • Critical distinctions among conceptual, logical, and physical models
  • Using concept models to discover use cases, business events, and other requirements
  • Interesting techniques to discover and meet additional requirements
  • How concept models help in package implementations, process change, and Agile development

 

Learning Objectives:

  • Understand the essential components of a concept model – things (entities), facts about things (relationships and attributes), and rules
  • Use entity-relationship modelling to depict facts and rules about business entities at different levels of detail and perspectives, specifically conceptual (overview) and logical (detailed) models
  • Apply a variety of techniques that support the active participation and engagement of business professionals and subject matter experts
  • Develop conceptual and logical models quickly using repeatable and Agile methods
  • Draw an Entity-Relationship Diagram (ERD) for maximum readability
  • Read a concept model/data model, and communicate with specialists using the appropriate terminology.
Read less

AI Governance, Responsible AI and Data Governance: Connecting the Dots

Unlock the future of trustworthy AI. This seminar helps you connect AI Governance, Responsible AI, and Data Governance into one actionable framework. Learn how to mitigate risks, ensure compliance, and embed transparency, fairness, and accountability in AI initiatives. Packed with real-world examples and practical tools, this session is ideal for leaders who want to align innovation with ethics and create lasting business value.
Read more

Artificial intelligence promises transformative business value, but without strong governance foundations, AI initiatives risk being biased, opaque, or non-compliant. Organizations are increasingly expected—by regulators, customers, and society at large—to ensure AI systems are ethical, explainable, and trustworthy. Yet, most governance efforts remain fragmented: AI governance is treated separately from Responsible AI principles, while Data Governance operates in a silo.

This seminar connects the dots. Participants will gain a comprehensive understanding of how Data Governance underpins Responsible AI, and how AI Governance frameworks operationalize ethics and compliance in practice. Combining strategy, case studies, and hands-on frameworks, the course provides attendees with the tools to design and implement governance approaches that make AI not only innovative, but also reliable and responsible.

Learning Objectives

By the end of this seminar, participants will be able to:

  • Understand the principles of AI Governance and the importance of Responsible AI
  • Explore the role of Data Governance in supporting ethical and compliant AI practices
  • Learn how to develop and implement AI Governance frameworks that align with organizational goals
  • Discover best practices for ensuring transparency, accountability, and fairness in AI systems
  • Examine real-world case studies that highlight successful AI and data governance integration
  • Gain practical tools and techniques for fostering a culture of responsible AI and data management
  • Identify common challenges and strategies to overcome them in the governance of AI and data.

 

Who is it for?

  • Data & AI Leaders: Chief Data Officers, AI program leads, and executives responsible for data-driven strategy
  • Governance & Compliance Professionals: Data Governance managers, risk officers, and compliance teams seeking to embed AI accountability
  • Technology Leaders: Architects, product owners, and IT leaders building or overseeing AI/ML solutions
  • Business Leaders & Policy Makers: Executives and decision-makers needing to ensure AI aligns with organizational goals and ethical standards
  • Researchers & Educators: Professionals in higher education and research institutions deploying AI in sensitive, high-stakes contexts.

 

Detailed Course Outline

Part 1 — Foundations & Risks

  • Session 1: AI Primer – The growth of AI, opportunities, and emerging risks.
  • Session 2: AI Pitfalls – Understanding algorithmic bias, data quality challenges, and unintended consequences.
  • Session 3: The Need for Governance – Why AI governance is essential, and how it intersects with Responsible AI and Data Governance.

 

Part 2 — Frameworks & Practices

  • Session 4: AI Governance in Practice – Policies, standards, and risk management frameworks.
  • Session 5: Responsible AI – Embedding ethical principles (fairness, transparency, accountability, inclusivity) into systems and processes.
  • Session 6: Data Governance for AI – Addressing data ethics, data quality, lineage, and security as enablers of trustworthy AI.

 

Part 3 — Connecting the Dots & Implementation

  • Session 7: Practical Integration Framework – A blueprint for combining AI Governance, Responsible AI, and Data Governance.
  • Session 8: From Frameworks to Business Strategy – Scaling governance into enterprise programs, communicating value, and embedding sustainable practices.
Read less

 

Also book one of the practical workshops!
Four top-rated international speakers will deliver compelling and highly practical post-conference workshops. Conference attendees receive combination discounts, so do not hesitate: book quickly, because workshop attendance is limited.
Payment by credit card is also available. Please mention this in the Comment field upon registration, and find further instructions for credit card payment on our customer service page.

24 March 2026

09:15 - 10:15 | New times: From Data for AI to AI for Data
Room 1    Rutger Rienks
09:15 - 10:15 | Grounded AI in Data Warehousing: How to Make Your LLM Stop Lying
Room 1    Eevamaija Virtanen
09:15 - 10:15 | A Holistic Data Architecture: From Source to Insight
Room 1    Rick van der Lans
09:15 - 10:15 | Beyond Hive: Navigating the Open Table Format Revolution in Modern Data Lakes
Room 1    Jos van Dongen
09:15 - 10:15 | Modeling the Mesh: understanding data products and domains [English spoken]
Room 1    Juha Korpela
09:15 - 10:15 | Data Governance Sprint: Kick-Start Governance in Just Weeks
Room 1    Mathias Vercauteren
09:15 - 10:15 | The Data Administration: essential foundation for effective data management
Room 1    Wouter van Aerle
12:30 - 13:30 | Lunch break
Plenary 
15:45 - 16:45 | Concept Modelling with Normal People – Five Key Lessons From 45 Years of Modelling [English spoken]
Room 1    Alec Sharp
16:50 | Reception
 

Workshops 2026

09:00 - 12:30 | Data Mesh Information Architecture: modeling data products and domains [English spoken]
March 25    Juha Korpela
09:00 - 12:30 | Grounded AI in Data Warehousing: How to Make Your LLM Stop Lying [English spoken]
March 25    Eevamaija Virtanen
13:30 - 17:00 | Concept Modelling for Business Analysts [English spoken]
March 25    Alec Sharp
13:30 - 17:00 | AI Governance, Responsible AI and Data Governance: Connecting the Dots
March 25    Mathias Vercauteren