Conference Outline

The conference starts at 9:30 am and ends at 5:15 pm on both conference days. Registration commences at 8:30 am.


Wednesday, April 6, 2016

9:30 Opening by the conference chairman Rick van der Lans
Session 1
Room 2
Guidelines for developing Logical Data Warehouses
Rick van der Lans
Room 2
Data Wrangling: A new way of Self-Service Exploration, Refining and Structuring of All Data
Bert Oosterhof, Trifacta

Session 2A
Room 1

Operational Analytics in a SQL and NoSQL World
Mike Ferguson

Session 2B
Room 2

Data Visualization – new opportunities, available tools and organizational aspects
Lex Pierik

Session 3A
Room 1

“Little data” – gaining sustainable insight through self-service BI
Jan Henderyckx

Session 3B
Room 2

Complexity and Big Data
Pieter den Hamer
Room 2
Driving Data-Driven Transformations
Michael Oranje, Hortonworks

Session 4
Room 2

Ten Mistakes to Avoid in Big Data Implementation
Krish Krishnan
17:15 Reception


Thursday, April 7, 2016

9:30 Opening by the conference chairman Rick van der Lans

Session 5
Room 2

An Agile Data Strategy for the Modern Enterprise – Regaining Order In a Sea of Data Chaos
Mike Ferguson

Room 2

Rocket Discover: Self Service Data Discovery to empower Business Users
Jamie Devlin, Rocket Software

Session 6A
Room 2

Guidelines for designing and implementing data lakes and data hubs
Krish Krishnan

Session 6B
Room 1

Engage digital business using analytics
Emiel van Bockel

Session 7A
Room 1

Getting ready for the new regulations on Ethical Data Management: Data Privacy through Information Governance
Daragh O Brien

Session 7B
Room 2

Alternative approaches in designing a Data Vault
Harm van der Lek

Room 2

The five building blocks for the data-driven enterprise
Rob Dielemans, GoDataDriven

Session 8
Room 2

Incorporating Hadoop, Spark, and NoSQL in BI Systems
Rick van der Lans


Daily schedule:

09:30 – 09:45 Opening by the conference chairman
09:45 – 11:00 Session 1
11:00 – 11:15 Coffee break
11:15 – 11:45 Case study
11:45 – 13:00 Session 2A and Session 2B (in parallel)
13:00 – 14:00 Lunch
14:00 – 15:15 Session 3A and Session 3B (in parallel)
15:15 – 15:30 Coffee break
15:30 – 16:00 Case study
16:00 – 17:15 Session 4

On the 6th of April, there will be a reception after the final session.

1. Guidelines for Developing Logical Data Warehouses (Dutch spoken)
Rick van der Lans, Managing Director, R20/Consultancy

The classic data warehouse architecture has served many organizations well over the last twenty years. But each architecture has its limits. With this well-known architecture, integrating self-service BI products is not straightforward, certainly not if users want to access the source systems.
Delivering 100% up-to-date data to support operational BI is difficult to implement. And how do we embed new storage technologies, such as Hadoop and NoSQL, into the architecture? Clearly, it’s time to migrate gradually to the logical data warehouse, in which new data sources can be hooked up to the data warehouse more quickly, in which self-service BI can be supported correctly, in which operational BI is easy to implement, in which the adoption of new technology, such as Hadoop and NoSQL, is simple, and in which the processing of big data is not a technological revolution but an evolution. This session discusses how organizations can migrate their existing architecture to this new one. Tips and design guidelines are given to help make this migration as efficient and smooth as possible.

  • What are the practical benefits of the logical data warehouse architecture and what are the differences with the classical architecture?
  • How can organizations step-by-step and successfully migrate to this flexible logical data warehouse architecture?
  • How can big data be added transparently to the existing BI environment?
  • How can self-service BI be integrated with the classical forms of BI?
  • How can users be granted access to 100% up-to-date data without disrupting the operational systems?

2A. Operational Analytics in a SQL and NoSQL World
Mike Ferguson, Managing Director of Intelligent Business Strategies Ltd.

For many years there has been a requirement for closed-loop processing between operational transaction processing systems and analytical systems, to make use of insights in the context of the process tasks being performed. The difference today is that this must be done at scale. The emergence of mobile commerce, the internet-of-things and on-line gaming means that high-velocity data ingest and high-speed read/write processing are needed. This has resulted in new kinds of operational systems that use NoSQL databases to help them scale. In addition, Big Data analytics has also emerged. This session looks at how operational analytics and closed-loop processing are still possible in a SQL and NoSQL world and how they can be implemented at scale.

  • The Modern Data Driven Enterprise
  • New forces driving the need for highly scalable on-line applications
  • The need to process and analyse transactional and non-transactional data at scale
  • Using NoSQL Data Stores and Big Data Analytics for scalable data processing
  • Key requirements for operational analytics
  • Implementing operational analytics using HBase and Apache Spark in-memory processing
  • Do’s and Don’ts of closed loop processing in a SQL and NoSQL environment


2B. Data Visualization – new opportunities, available tools and organizational aspects (Dutch spoken)
Lex Pierik, Business Intelligence Consultant, Think.Design.Make.

Visualization nowadays gets a lot of attention in organizations, as it should. Actionable insights, the communication of conclusions, the discovery of data patterns never seen before, and much, much more: this is really important for organizations to survive, and visualization can help. That is also why so many data visualization tools and techniques are available right now, and why we see so many new visualization roles and new ways to present data in many organizations. But to be ready for the future, one needs to know the past and understand the present. In this presentation you will learn about data visualization through the centuries, look at the present, and get a preview of the future of data visualization.

  • Data visualization: Just another buzz word?
  • The opinion and views of thought leaders and the truth
  • How to Visualize: market overview with D3.js, Tableau, Qlik, Power BI
  • Organizational aspects – The artist in the data scientist’s role
  • Future trends that will rock your world


3A. “Little data” – gaining sustainable insight through self-service BI (Dutch spoken)
Jan Henderyckx, Managing Director,

Too often we see people embark on a data insight journey without addressing data management fundamentals. Giving your knowledge workers access to data can quickly become a liability rather than a value creator if they are unable to turn the data into actions in a sustainable manner.
Having a data lab up and running does not equal a sustainable integration into the operational and strategic fabric of an organisation. Big Data and Analytics are omnipresent in many organisations, and the mantra of statistical relevance is often used as an excuse for neglecting “little” data. Even if all your big data is accessible in your data lake, you still need to understand what the data relates to. Without context you just have a huge pile of data points.
The little data – data that has a life cycle and that describes the related business concepts – will provide the context that allows you to obtain actionable insights. Proper management of the life cycle of the business entities and their relationships will therefore give a significant boost to your business outcomes. Unfortunately, many master and reference data management (MDM/RDM) projects are not providing the anticipated benefits.
This session will give you practical advice to setup your data management and information governance in such a way that you can get maximum value out of your data without increasing your liabilities.

  • Setting the scene: Defining a sustainable Information centric organisation
  • Drowning in the data lake or having a breach? Information and Data Governance as a safeguard
  • MDM and RDM design principles
    • People, Process and technology
  • Establishing a MDM organisational component
    • How to bring the proper life-cycle management into your organisation?
    • Do you need a MDM COE?
    • Roles and responsibilities for managing the life-cycle
  • Modelling the master and reference data
    • How much common vocabulary is mandatory?
    • Dealing with different codesets
    • Handling temporal aspects
  • Subject agnostic Solutions
    • Hierarchy Management
    • Reference and code management
    • Model Driven Architecture
    • Graph-based Solutions


3B. Complexity and Big Data (Dutch spoken)
Pieter den Hamer, Lead Big Data, Business Intelligence & Analytics, Alliander

What does ‘big data’ have to do with the recent economic crisis, the transition towards sustainable energy and – just to mention something else – the ongoing failures in many IT projects? And why do many organizations still talk about big data & analytics while true success stories remain uncommon? The answers to these questions are hidden in the concept of ‘complexity’. Big data offers many game-changing opportunities, but without a ‘big theory’ our efforts remain a scattergun approach. Complexity offers the much-needed foundation to give data-driven strategies, management policies, innovations and technologies more aim and focus. In complexity science we recognize that organizations are no longer statically structured, reasonably predictable, top-down controllable entities. Rather, we consider the world as a network of continuously interacting people, organizations and systems: an ever-evolving ecosystem in which adaptivity, robustness, emergence and transitions are key characteristics. With big data, we are able to monitor this complex world in ever more detail and in real time. But on which aspects do we target our big data efforts? What can we still predict, control or influence in a complex world, and which data, which ‘KPIs’ and which information technology do we therefore need?

  • The world as a complex system in examples: from ant hills to business ecosystems
  • Complexity science as a foundation for data driven strategy, management and innovation
  • Big data as practical enabler of complexity science: from abstract models to near real-life simulations & serious gaming
  • 19th century KPIs are dead, long live 21st century bottom-up KIFs (and early warning indicators)!
  • Big data, visual analytics and the prevention of data swamps and infoglut
  • Technical implications: from hulky data warehouses to on-the-fly data integration
  • Cases: smart grids, smart cities, smart societies

4. Ten Mistakes to Avoid in Big Data Implementation
Krish Krishnan, Founder and President, Sixth Sense Advisors

The world of technology and infrastructure evolved quickly between 2000 and 2010 and continues to evolve, offering more options for solving computing requirements than ever before. In the midst of all this evolution arises chaos: which technology, which solution, what integration, what is right or wrong? This session focuses on the mistakes to avoid in a Big Data implementation. We will focus on “data” and on the mistakes that data-driven transformations need to avoid. The session will cover:
  • Technologies
  • Data Driven Transformation
  • Mistakes to avoid
  • Financial Impact
  • Organizational Impact
  • Risk and Mitigation Strategies

5. An Agile Data Strategy for the Modern Enterprise – Regaining Order In a Sea of Data Chaos
Mike Ferguson, Managing Director of Intelligent Business Strategies Ltd.

For most organisations today, the data landscape is becoming increasingly complex. Transaction systems are now spread across on-premises and cloud environments, multiple data warehouses and data marts often exist, and big data platforms have also entered the enterprise. Data quality issues in this kind of landscape can cause significant problems and be hard to eradicate. In addition, new data sources continue to grow, and newly collected data is often too big to move and process centrally. So how do you deal with all this to ensure that data remains trusted and that data governance keeps data under control? This session looks at this problem and shows how to implement an agile data strategy to manage data in a distributed and hybrid computing environment.
  • The Increasing complexity of a distributed data landscape
  • What do you need to consider in a modern data strategy?
  • Managing data in a distributed and hybrid computing environment
  • Multiple tools – self-service DI versus EIM – how do they fit together?
  • Dealing with data when it is too big to move
  • The move towards data as a service inside the enterprise
  • The Role of data virtualisation in a modern data strategy

6A. Guidelines for designing and implementing data lakes and data hubs
Krish Krishnan, Founder and President, Sixth Sense Advisors

The evolution and acceptance of Hadoop within the enterprise to create data foundations, data lakes and data hubs is beyond a trend. The issue that has arisen in this new realm is the structure of the data, which is multi-structured, multi-formatted and stored in complex file-based hierarchies. How do we explore this data? Attend this session to discuss guidelines for designing and implementing data lakes and data hubs: the best practices and worst mistakes, and how to create the guidelines for your organization. What you will learn:
  • Data Lakes and Data Hubs
  • The Swamp Dilemma
  • The Best Practices Approach
  • Enterprise Creation of Guidelines
  • Case Studies

6B. Engage digital business using analytics (Dutch spoken)
Emiel van Bockel, Manager Information Services – Analytics & Bureau ISBN, CB

The world is undergoing rapid change. Companies wanting to thrive in the years ahead will need to rely heavily on business analytics to provide the required insights. Organizations need to think about the transition to digital business. Every leading company within the digital industry will have analytics incorporated in its services. No matter who your customers are, they are now your decision makers, and keeping them will involve the efficient use of analytics. The key success factors for internal and external analytics are mobile usage, great visualisations and high performance. Emiel van Bockel will share his vision and global insights on how to successfully implement the business analytics you need, in the context of the disruptive books market and how BI technology became part of it.
  • Incorporating Digital in your Business Strategy
  • The role of Analytics in maintaining the service level for your customers
  • Which role will In-Memory technology and Big Data technology play?
  • Key factors for future success: mobile, data visualization and performance
  • Best practices and lessons learned in the book publishing industry

7A. Getting ready for the new regulations on Ethical Data Management: Data Privacy through Information Governance
Daragh O Brien, Managing Director, Castlebridge Associates

By now, Europe should have an agreed text for the General Data Protection Regulation, and we are three months into the transition phase to enforcement. We have also seen the European Data Protection Supervisor publish an Opinion on Ethics in Information Management, which has been welcomed by a number of EU Data Protection Authorities, and which places a focus on the importance of the most fundamental of Human Rights in the EU – the preservation of Human Dignity. This session explores how current models for Data Management can be applied to help organisations meet or exceed their requirements under the GDPR, given the explicit focus on Data Governance in the Regulation. The session will also look at how ethics and ethical concepts can be implemented as part of the Information Governance structures and culture of your organisation. This session will draw examples from both the management and governance of personal data and other high-profile examples of the mishandling of other kinds of data to demonstrate key points.
Delegates will:
  • Get an up-to-date overview of the structure, obligations, duties, and penalties under the EU General Data Protection Regulation
  • Get clear insights into how a holistic Information Governance strategy can support compliance with the GDPR, and why siloed approaches won’t work.
  • Develop a clear understanding of how ethics-based approaches to Information Management can be developed, and why they will be increasingly important
  • Go back to the office with a few key pictures they need to draw to help get their colleagues aligned for the future of Ethics and Data Privacy

7B. Alternative approaches in designing a Data Vault (Dutch spoken)
Dr. Harm van der Lek, VanderLek Advies

Data Vault is a popular method in the Netherlands for the design of an Enterprise Data Warehouse. Which approach do you take? You can lock yourself in a room and brainstorm together about the hubs, links and satellites. Or you can derive the DV model from an Information Model, but what exactly is the latter? Maybe it is possible to use a DW-automation tool. We will consider the precise structure of, and the standard(s) for, a Data Vault model. Here we will encounter the classic debate about ‘end-dating links’. As a consequence, we will be able to look at the foundation from a different angle.

  • Can you design a Data Vault off-the-cuff?
  • Generating a Data Vault with tools such as Qosqo, WhereScape, Kalido, Attunity Compose
  • Is it allowed to store history in a Link table?
  • Looking at the foundation from a different angle
  • How flexible is a Data Vault?
  • What to do with composite ‘business keys’?

8. Incorporating Hadoop, Spark, and NoSQL in BI Systems (Dutch spoken)
Rick van der Lans, Managing Director, R20/Consultancy

Most current BI systems are developed with classic database servers, ETL tools, and reporting tools. But with the advent of Big Data, are the architectures of these BI systems still sufficient? Are they flexible enough? Can they handle such massive quantities of data? Or, is it time to adopt all these new data processing technologies, such as Hadoop, Spark, MapReduce, NoSQL, and SQL-on-Hadoop?
This session discusses how these new technologies can be deployed to develop modern BI systems. BI systems that allow organizations to analyze Big Data with the simplest self-service tool up to the most advanced analytical tool, that make integration of Enterprise Data and Big Data transparent, and that allow an evolutionary adoption of all the new technologies.

  • Critical assessment of new big data technologies, including HDFS, Spark, MapReduce, Storm, and SQL-on-Hadoop
  • Application areas of Hadoop in BI systems: data scientist sandbox, offloading cold data, staging area, and ETL engine
  • Letting classic reporting and analytical tools access Big Data stored in Hadoop
  • Transparently offloading Data Warehouse data to Hadoop
  • Moving data quality aspects to the business users by using self-service data preparation
  • Do we need physical or virtual data lakes?



Data Wrangling: A new way of Self-Service Exploration, Refining and Structuring of All Data
Bert Oosterhof, EMEA Field CTO at Trifacta

In the past, Data Integration was solved by programming (input-process-output), followed by an era of ETL (extract-transform-load). Nowadays companies look for best-of-breed technologies across the complete data supply chain. Data Wrangling is an important component in that solution stack: it gives business users (data analysts, data scientists, among others) the means to easily explore, transform and enrich raw, complex data into clean and structured formats for further analysis. Trifacta has combined the latest research in human-computer interaction, scalable data management and machine learning into a unique solution. This session explains the why, what and how of Data Wrangling.

Driving Data-Driven Transformations
Michael Oranje, Territory Manager Benelux at Hortonworks

Topics that will be covered during this session are:

  • The journey to the Data Lake
  • Why Hadoop?
  • Why Hortonworks?

Rocket Discover: Self Service Data Discovery to empower Business Users
Jamie Devlin, Sales Engineer at Rocket Software

Rocket Discover and self-service BI: an exploration of the changing landscape of Business Intelligence, with a focus on business-user-oriented data preparation and visualisation. A practical demonstration to guide you through the intuitive nature and ease of use of Rocket Discover, whilst outlining the real benefits to your business.

The five building blocks for the data-driven enterprise (Dutch spoken)
Rob Dielemans, Co-Founder & Managing Director of GoDataDriven

Data is the driving force of the fourth industrial revolution. Leading organizations have embraced the transformation to a data-driven approach. Organizations that demonstrate exponential growth are most often the ones that are able to base their business decisions on real-time data and predictions. Of course, data and technology are essential, but they only add value if they are aligned with a couple of other elements.

In this presentation Rob Dielemans will show how category leaders became data driven by re-organizing around the five building blocks of the data driven enterprise.


  • The fourth industrial revolution
  • Transforming into a technology company
  • The modern data landscape
  • Why data and technology are not your main challenge
  • Continuous innovation: The five building blocks of the data driven enterprise