Schedule

The data mesh: a distributed data architecture [Dutch spoken]

According to Rick van der Lans, centralized and monolithic data architectures have a number of problems, and he proposes the data mesh as a data architecture solution. In contrast to data warehouses, data lakes and data hubs, which are centralized and monolithic, the data mesh is a distributed solution. Classical responsibilities within an IT organization will shift dramatically.

Initiatives such as digital transformation and becoming a data-driven organization are increasing the importance of data within organizations. Organizations want to do more with data. Their existing IT landscape is often inadequate, so something needs to change. Many are looking for solutions based on data lakes, data hubs and data factories, but a data architecture that is also well worth considering is the data mesh. While data warehouses, data lakes and data hubs are primarily centralized and monolithic solutions, the data mesh is a distributed solution. The data architecture is not broken down based on the nature of the application, but based on business domains. The division is no longer transactional systems versus analytical systems. As a result, traditional responsibilities within an IT organization will shift dramatically. For example, single-domain engineers responsible for transactional systems will also become responsible for the interfaces that provide analytical capabilities to the organization (a minimal sketch of this idea follows the topic list below).

  • The practical problems of centralized and monolithic data architectures
  • Differences between data mesh and data fabric
  • From service interface to data product
  • The importance of a foundation, or in other words ‘data infrastructure as a platform’
  • The difference between a single-domain and hyper-domain data mesh
  • The roles of data warehouses, data lakes and data hubs in a data mesh.
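
To make the shift concrete, here is a minimal, hypothetical sketch (not from the session) of the data mesh idea that a domain team owns both its transactional interface and its analytical data product; all names and methods below are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount: float


class OrdersDomain:
    """One business domain owning both interfaces, as a data mesh prescribes.

    Hypothetical shape for illustration only: in a real mesh the analytical
    interface would be a governed, discoverable data product, not a method.
    """

    def __init__(self) -> None:
        self._orders: list[Order] = []

    # Transactional interface: operational writes handled by the domain team.
    def place_order(self, order: Order) -> None:
        self._orders.append(order)

    # Analytical interface: the same team also serves its data as a product.
    def order_history(self) -> list[Order]:
        return list(self._orders)


domain = OrdersDomain()
domain.place_order(Order("o-1", 250.0))
print(domain.order_history())  # consumers read from the domain's data product
```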

Fast Data - concepts, architecture and technology of streaming analytics [Dutch spoken]

Streaming Analytics enables the execution of predictive models that operate on never-ending data streams. Bas Geerdink discusses the three main steps of a streaming analytics solution and the core of the required architecture. Using various use cases, he will show the challenges and the technological options for implementing the architecture.

Streaming Analytics (or Fast Data processing) is becoming an increasingly popular subject in financial services, marketing, the internet of things and healthcare. Organizations want to respond in real-time to events such as clickstreams, transactions, logs and sensor data. A typical streaming analytics solution follows a ‘pipes and filters’ pattern that consists of three main steps: detecting patterns on raw event data (Complex Event Processing), evaluating the outcomes with the aid of business rules and machine learning algorithms, and deciding on the next action. At the core of this architecture is the execution of predictive models that operate on enormous amounts of never-ending data streams.
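
The ‘pipes and filters’ pattern can be illustrated with plain Python generators; this is a minimal sketch under assumed event fields (user_id, kind, amount), not an implementation from the talk:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class Event:
    user_id: str
    kind: str      # e.g. "click" or "transaction"
    amount: float


# Filter 1: detect a pattern on raw events (stand-in for Complex Event Processing).
def detect(events: Iterable[Event]) -> Iterator[Event]:
    for e in events:
        if e.kind == "transaction" and e.amount > 1_000:
            yield e


# Filter 2: evaluate outcomes with a business rule or model score (toy "model" here).
def evaluate(events: Iterable[Event]) -> Iterator[tuple[Event, float]]:
    for e in events:
        yield e, min(e.amount / 10_000, 1.0)


# Filter 3: decide on the next action.
def decide(scored: Iterable[tuple[Event, float]]) -> Iterator[str]:
    for e, score in scored:
        yield f"block {e.user_id}" if score > 0.5 else f"review {e.user_id}"


stream = [Event("u1", "click", 0.0), Event("u2", "transaction", 7_500.0)]
for action in decide(evaluate(detect(stream))):
    print(action)  # -> "block u2"
```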

But with opportunities comes complexity. When you’re switching from batch to streaming, suddenly time-related aspects of the data become important. Do you want to preserve the order of events, and have a guarantee that each event is only processed once? In this talk, I will present an architecture for streaming analytics solutions that covers many use cases, such as actionable insights in retail, fraud detection in finance, log parsing, traffic analysis, factory data, the IoT, and others. I will go through a few architecture challenges that will arise when dealing with streaming data, such as latency issues, event time versus server time, and exactly-once processing. Finally, I will discuss some technology options as possible implementations of the architecture.
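
The event-time versus server-time distinction mentioned above can be shown in a few lines: assign each event to a fixed window based on when it happened, not when it arrived. A minimal sketch, assuming 5-minute tumbling windows:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(minutes=5)
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)


def window_start(event_time: datetime) -> datetime:
    """Map an event to its 5-minute tumbling window by *event time*."""
    return event_time - (event_time - EPOCH) % WINDOW


counts: dict[datetime, int] = defaultdict(int)

# The second event arrives late; windowing on event time still places it
# in the window where it belongs, which server-time windowing would not.
arrivals = [
    datetime(2021, 6, 30, 10, 7, tzinfo=timezone.utc),
    datetime(2021, 6, 30, 10, 3, tzinfo=timezone.utc),  # out of order
]
for t in arrivals:
    counts[window_start(t)] += 1

for start, n in sorted(counts.items()):
    print(start.time(), n)  # 10:00 -> 1 event, 10:05 -> 1 event
```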


Agile model-driven data warehouse development: A client success story with Datavault Builder

First-hand insights into the Datavault Builder success story of a client in the Dutch floral industry:

• Their journey to a better, faster, more structured and scalable data and information management environment using Datavault Builder
• How the model-driven development platform of Datavault Builder led to outstanding time-to-market results
• How the client increased transparency with data lineage, and how the deployment module enabled a flawless deployment pipeline


Experiences with embedded BI in customer-facing applications [Dutch spoken]

The development of Embedded BI applications requires a special development approach and the use of diverse technologies. Lead Software Developer & Architect Marc de Haas presents the five levels of Embedded BI and the various architecture options, and will discuss implementation issues of Embedded BI.

Virtually all organizations have experience in developing traditional BI applications, such as dashboards and reports for employees. However, the development of Embedded BI applications that are used by customers and suppliers as part of online applications is still unknown territory. Customer-facing BI applications can be used, for example, to speed up time to market, increase customer satisfaction and achieve greater reach.

These types of applications require a different development approach and the use of different technologies. In this session, the various building blocks are discussed, such as web embedding, secure custom portals, SaaS/COTS embedding, embedding of real-time and interactive decision points, and action-oriented dashboards. The importance of scalable cloud-based database servers such as Google BigQuery, Amazon Redshift, Snowflake and Starburst will also be discussed.

Topics:

  • Five levels of Embedded BI
  • Architecture options: serverless or not, real-time and batch, lambda and kappa (a toy sketch of the lambda/kappa distinction follows this list)
  • Democratization of insights through customer-facing BI solutions
  • Success stories of Embedded BI
  • Four important blocks: the database and infrastructure, the analytics platform, software development resources and the data product owner.
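
As a toy illustration (not from the session) of the lambda/kappa item above: in a lambda architecture a complete batch view is merged with an incremental speed view at query time, while a kappa architecture would recompute everything from the stream alone. Names and data are invented:

```python
from collections import Counter

historical = ["page_a", "page_b", "page_a"]  # reprocessed nightly (batch layer)
recent = ["page_b", "page_c"]                # processed on arrival (speed layer)

batch_view = Counter(historical)  # slow but complete and accurate
speed_view = Counter(recent)      # fast, covers what batch has not seen yet

serving_view = batch_view + speed_view  # merged at query time
print(serving_view.most_common())       # [('page_a', 2), ('page_b', 2), ('page_c', 1)]
```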

Continuous value creation from an incremental DWH architecture [Dutch spoken]

Within a large pension insurance company, sustainability was one of the design principles when setting up a data environment. The data warehouse architecture was constantly adapted in an iterative manner. In this session, Mark van der Veen discusses how the data foundation was adapted and expanded for multiple purposes.

Attempts to set up a data warehouse within the pension insurance company had met with varying degrees of success. A large-scale quality survey of the relevant administrations in 2009 created a new urgency to work with data, with data integration taking centre stage.

Since, from a business point of view, such an investment must have a long lifespan – there were no (large-scale) cloud solutions yet – sustainability was one of the design principles. Besides this principle, flexibility, reliability, availability and repeatability were also important design principles. The design was created by the team that had to realise the environment. In a period of six weeks, a prototype was built using various methods and techniques. This resulted in a ‘grand high level design’ for the data model and the technical solution for the environment, in which an iterative development strategy was chosen.

After the realisation of the quality survey and the associated in-control statement, the environment was further expanded. This was important for executing portfolio analyses, performing in-depth quality analyses, supporting operational control, and providing the foundation for the migration process and data pipeline used to select and (commercially) migrate customers to the new product propositions. In 2018, the same data environment was further expanded for the analysis and implementation of new legislation. Now this environment is being used for data science activities. It has thus celebrated its ten-year anniversary and has provided many strategic, tactical and operational goals with the data needed to achieve the desired results.

  • Setting up a data foundation for different data goals.
  • Grand Design with iterative implementation of the data environment.
  • Data warehousing as basis for structured analyses and data deliveries.
  • Business Intelligence application for (dynamic) reports for decision-making during processes.
  • Agile approach in team composition (analysts, (data) engineers, functional administrators).
  • Continuous value creation by developing on existing data environment.

During the session, Mark van der Veen will share his experiences on how to get value from the initial set-up of the data environment.


Embedding Data Science in your Data Platform [Dutch spoken]

The migration of data science models and related activities to a controlled data and application landscape is a tough challenge according to Data Architect Hans Pannekoek. He takes you on a journey from ambition to operationalization, showing important choices and how to migrate.

In the past few years many organizations have invested in experimenting with Data Science, predictive models and Analytics. Often we see these models used as point solutions in the business, with little attention to support and a lot of manual work still to be done. The next challenge is to move from experimentation to operationalization. How do you move the models and related data science activities to a governed IT data and application landscape? We do this to get a more widely distributed and, more importantly, operationalized data environment within the organization. In addition, this will help address the ambition to become even more Data Driven.
In this session we will show you the journey from ambition to operationalization, the architecture, a few important decisions to make and the way to migrate to such an environment. We will address the unique challenges, share hands-on experience and present our lessons learned.

  • Data Platform: in the cloud or not?
  • Not only focus on Technology, but also on Organization
  • Migrating and integrating data at full scale
  • Focus on business value
  • Share our lessons learned
  • Become Data Driven.

Guidelines for Migrating Your Data Warehouse to the Cloud

Migrating a data warehouse to the cloud can be a daunting task because of the system’s long lifespan. According to Mike Ferguson, it is important to de-risk a data warehouse migration project before you migrate anything. He looks at what is involved in migrating a data warehouse to a cloud environment and how a migration can cause changes to the data architecture.

Many companies today are looking to migrate their existing data warehouse to the cloud as part of a data warehouse modernisation programme. There are many reasons for doing this, including the fact that many transactional data sources have now moved to the cloud, or that the capacity of an on-premises data warehouse has been reached with another project looming. Whatever the reason, data warehouse migration can be a daunting task because these systems are often five or ten years old. A lot may have happened in that timeframe, so there is a lot to think about. There are also a lot of temptations and decisions that can increase the risk of failure. This session looks at what is involved in migrating data warehouses to a cloud environment, what options you have and how a migration can cause changes to the data architecture.

  • Why migrate your data warehouse to the cloud?
  • The attraction of cloud-based analytical DBMSs and data platforms
  • Should you just migrate, re-design, switch DBMSs or do all of this?
  • What are the options and their pros and cons when migrating a data warehouse to the cloud?
  • How can you de-risk a data warehouse migration project before you migrate anything?
  • Steps involved in migrating an existing data warehouse to the cloud
  • Dealing with cloud DW migration issues such as SQL differences
  • How will migration affect data staging, ETL processing and data architecture?
  • Integrating data science into a cloud-based data warehouse
    • Training and deploying machine learning models into your cloud analytical database
  • Integrating cloud-based data warehouses, data science and streaming with your BI tools.

Would you let AI do your BI?

In BI and analytics, vendors are adding artificial intelligence capabilities with the promise of better or faster decisions. But how far could AI go? In this session, Barry Devlin explores the challenges and potential benefits of moving from BI to AI. He will also discuss the dangers of thoughtless automation and the ethical considerations in adopting AI.

AI is everywhere. Its early invasion of everyday life – from dating to policing – has succeeded beyond its proponents’ wildest dreams. Analytics and machine learning built on “big data” feature daily in the mainstream media.

In IT, BI and analytics vendors are adding artificial intelligence to enhance their offerings and tempt managers with the promise of better or faster decisions. So, how far could AI go? Will it take on a significant proportion of decision making across the entire enterprise, from operational actions to strategic management? What would be the consequences if it did?

In this session, Dr. Barry Devlin explores the challenges and potential benefits of moving from BI to AI. We explore its use in data management; its relationship to data warehouses, marts, and lakes; its emerging role in BI; its strengths and weaknesses at all levels of decision-making support; and the opportunities and threats inherent in its two main modes of deployment: automation and augmentation.

What you will learn:

  • Where and why AI has been incorporated into today’s BI and analytics products
  • How data preparation and governance benefit from AI
  • What AI offers to existing data warehouses and lakes
  • The important difference between automation and augmentation
  • The dangers of thoughtless automation and benefits of well-considered augmentation
  • Ethical considerations in adopting AI in enterprise decision making.

Overview of the Open Energy Data Platform (OSDU) [Dutch spoken]

This session presents the processing of data from oil and gas exploration, development, and new energy sources. Johan Krebbers from Shell International will discuss the advantages of the metadata-driven OSDU Data Platform, such as easy access for AI-based applications and datatype-optimized APIs.

In September 2018, the development of the OSDU Data Platform was started by the Open Group. The OSDU Forum started as a standard data platform for the oil and gas industry, which will reduce silos and put data at the center of the subsurface community. All types of data (structured and unstructured) from oil & gas exploration, development, and wells are loaded into this single OSDU Data Platform. The data is accessible via one set of APIs; some datatype-optimized APIs will be added later. The platform enables secure, reliable, global, and performant access to all subsurface and wells data. It acts as an open, standards-based ecosystem that drives innovation.

On March 24, 2021 the first operational release was launched on the public cloud platforms from Amazon, Google, and Microsoft. Later in 2021, oil & gas production data and data from new energy sources, such as wind and solar farms, hydrogen, geothermal, and CCUS, will be added to this single, open source-based energy data platform. The OSDU Data Platform acts as a system of record and is therefore the master of that data. This session discusses the challenges involved in setting up such an ambitious project and platform and the lessons learned along the way.

  • Bringing all energy source-related data together in a single OSDU Data Platform results in easy access for AI-based applications.
  • The advantages of an open source- and real-time-based data platform.
  • What is the value of a system of record?
  • One set of APIs, plus datatype-optimized APIs, to access all data types; applications can be installed anywhere.
  • The OSDU Data Platform is metadata-driven.

Allianz Benelux Industrial Platform Industrialisation Program: BI on Hybrid Cloud

In this session we learn how and why Allianz Benelux moved from their bespoke Data Warehouse Automation tool to an enterprise-grade solution, and the improvements this enabled them to make to their data infrastructure.

Topics covered include:
1. Automating an Oracle to Snowflake migration project.
2. Managing a Data Vault architecture that is growing in size and complexity.
3. How WhereScape Data Warehouse Automation performs in comparison to Allianz’s homegrown solution.


Data Strategy according to DMBoK [Dutch spoken]

Autonomous systems are increasingly deployed for problem solving, but they depend strongly on the data in your organization. A data strategy is therefore needed to enable your organization to make fact-based decisions. Peter Vieveen will help you define a data strategy using the Data Management Body of Knowledge, and explain how to use gamification and data literacy to communicate the data strategy to your organization.

In an increasingly complex and interconnected world there is a growing need for autonomous systems that go beyond the capabilities of human operators. Swarm intelligence systems rely on emergent intelligence for their problem solving. Decisions flowing from these intelligent systems depend on the data in your organization. Implementing data quality leads to better data. But do you know whether the data is fit for purpose? Is the data used in the appropriate context within your BI systems?

A data strategy is needed to enable your organization to make fact-based decisions through data-literate employees supported by intelligent systems. Gamification and data literacy are the means to convey that strategy. Peter Vieveen will guide you through the process of defining such a data strategy using the Data Management Body of Knowledge and explain how to use gamification and data literacy to bring the data strategy across to your organization.

  • A concrete approach to defining a data strategy
  • The importance of gamification in a data strategy
  • The four pillars of data wisdom in gamification
  • Successfully implementing the data strategy with the knowledge areas of DMBoK
  • Experiences with the game Data Moles for creating a data strategy.

Physical data modelling in a 'modern data warehouse' based on Snowflake or BigQuery [Dutch spoken]

When migrating on-premises data warehouses to the public cloud, new technological possibilities allow new approaches. Data solution architect Rogier Werschkull will give insights into the (im)possibilities in this area, looking at how this can be practically tackled within solutions like Snowflake or Google BigQuery.

Around 2015, companies in the Netherlands started migrating their on-premises data warehouses to the public cloud. When doing this, it is important to realise that it is not always logical for the deployed physical data models to remain the same as they were on-premises. The new technological possibilities not only allow new approaches, but can also cause anti-patterns within existing physical modelling techniques such as Dimensional modelling (Kimball) or Data Vault. Or you may just need a slightly different approach to implement these techniques. The goal of this session is to give insight into the (im)possibilities in this area, looking at how this can be practically tackled within solutions like Snowflake or Google BigQuery.

Examples of physical data modelling topics we will cover:

  • The use of the semi-structured data type VARIANT in Snowflake: the enabler for a true ‘separation of concerns’ between efficient initial data storage and the schema-on-read phase, and its usage in Data Vault satellites
  • The re-introduction of dimensional / denormalised structures in the integration layer if one also implements a historical staging layer
  • The (im)possibilities of partitioning / clustering in BigQuery / Snowflake and why doing this properly is essential for scalable performance and controlling costs
  • Whether to use concatenated business keys, hash keys or (integer) surrogate keys (see the sketch after this list).
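
As a small illustration of the key-choice bullet above, here is a hedged sketch of the common Data Vault practice of hashing a normalised, concatenated business key; the delimiter and the MD5 choice are conventions, not a prescription from the session:

```python
import hashlib


def hash_key(*business_keys: str) -> str:
    """Derive a deterministic hash key from one or more business key parts.

    Normalise (trim, upper-case), join with a delimiter that cannot occur in
    the data, then hash. Deterministic keys make loads idempotent and can be
    computed in parallel, unlike integer surrogate keys from a sequence.
    """
    normalised = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()


print(hash_key("C-1001"))        # single business key
print(hash_key("C-1001", "NL"))  # concatenated composite business key
```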

Session highlights

  • What does a ‘modern way of data warehousing’ look like and how does it differ from the classical approach?
  • What is the functional role of each of the four layers of this modern data warehouse?
  • What are the main advantages/disadvantages of the most widely used cloud analytical databases: Snowflake, Google BigQuery, Amazon Redshift and Azure Synapse?
  • What are appropriate physical data modelling techniques to deploy per data warehouse layer (with a focus on Snowflake and BigQuery) and why?
  • Lessons learned in Snowflake and BigQuery: what works and what doesn’t when physically implementing these data models?

The data kitchen of the RIVM: from test location to Covid-19 dashboard [Dutch spoken]

Martijn van Rooijen and Jeroen Alblas give a peek into the data kitchen of the RIVM. After the introduction of the SARS-CoV-2 virus in the Netherlands, the healthcare chain faced major challenges. For the development of the Covid-19 dashboard, it was important to extract high-quality data streams from the care chain. Data virtualisation played an important role in this.

On 27 February 2020, the SARS-CoV-2 virus was detected for the first time in a patient in the Netherlands. The importance of high-quality data from the entire care chain in fighting the pandemic quickly became clear. Every organization in the Dutch healthcare chain is involved: GGD, VWS, RIVM, laboratories, hospitals, care institutions, GPs, patient federations, ICT suppliers, and so on.

This presentation provides a glimpse into the RIVM’s data kitchen. What were the challenges in collecting all the ingredients, seasoning them and serving them on the (dash)plate? An important piece of kitchen equipment was the pressure cooker. This session will focus on the experiences gained during the development of the Corona dashboard and required systems under high pressure, with the whole population of the Netherlands watching.

  • Overview of the healthcare chain and associated data flows.
  • Tools and techniques for harmonisation and generation of output at RIVM.
  • The role of open data and FAIR principles.
  • Initial experiences with data virtualisation.
  • Overview of data for the Covid-19 dashboard.

Guidelines for designing sustainable data architectures [Dutch spoken]

A sustainable data architecture is an architecture that is easy to adapt and expand. Rick van der Lans looks at the role of a data lake, data hub or data warehouse in a sustainable data architecture. Requirements and design rules will also be considered.

Sustainable data architectures are needed to cope with the changing role of data within organizations and to take advantage of new technologies and insights. A sustainable data architecture is not one that only supports the current and upcoming requirements, but one that can survive for a long time because it is easy to adapt and expand. As requirements for data usage change, a sustainable data architecture should be able to adapt without the need for major redevelopment and rebuilding exercises.

No magical products exist for developing sustainable architectures. Several product types are required to achieve this. Other design principles will also have to be applied and certain firm beliefs will have to be sacrificed. This session examines the requirements for sustainable data architectures and how these can be designed and developed.

  • The added value of data architecture automation tools
  • Seven requirements for sustainable data architectures: definition independent, technology independent, runtime platform independent, distribution independent, architecture independent, design principle independent and metadata independent
  • Design rules for sustainable transactional systems
  • Does IT development lend itself to automation?
  • A metadata architecture as part of a sustainable data architecture
  • What is the role of a data lake, data hub or data warehouse in a sustainable data architecture?

How to Revamp your BI and Analytics for AI-based Solutions

In this half day virtual session, Dr. Barry Devlin explores the challenges and potential benefits of moving from BI and Analytics to AI. We explore its use in data management; its relationship to data warehouses, marts, and lakes; its emerging role in BI; its strengths and weaknesses at all levels of decision-making support; and the opportunities and threats inherent in its two main modes of deployment: automation and augmentation.

As the pandemic has proven, digital transformation is possible—and at speed. Many more aspects of business operations have moved online or have enabled remote or no-touch access. This evolution has generated another growth spurt of “big data”, from websites, social media, and the Internet of Things (IoT). With new customer behaviour likely to stick after the pandemic and working from home remaining an important factor, novel approaches to decision-making support are an increasingly important consideration for many organisations.

In this context, the recent growth in interest in and focus on the use of artificial intelligence (AI) and machine learning (ML) across all aspects of business in every industry and government raises important questions. How can AI/ML be applied at management levels in support of decision making? What new possibilities or problems does it present? How far and how fast can businesses move to benefit? What are the downsides?

The seminar

AI, combined with big data, IoT and automation, offers both the threat and the promise of revolutionising all aspects of IT, business and, indeed, society. In this half-day session, Dr Barry Devlin explores what will enable you to take full advantage of emerging AI technology in your decision-making environment. Starting from the familiar worlds of BI and analytics, we position traditional and emerging BI and analytics tools and techniques in the practical application of AI in the business world. Extrapolating from the rapid growth of AI and IoT in the consumer world, we see where and how it will drive business decision making and likely impact IT. Based on new models of decision making at the organisational and personal levels, we examine where to apply augmentation and automation in the roll-out of AI. Finally, we address the ethical, economic and social implications of widespread adoption of artificial intelligence.

Learning objectives

  • A comprehensive architectural framework for decision-making support that spans from BI to AI
  • A brief primer on the evolution, key concepts, and terminology of AI
  • Understanding the relationship between “big data” / IoT / social media and AI/ML and how it drives business value
  • Approaches to applying AI to decision making
  • Augmentation vs. automation of decision making
  • How AI, social media, and IoT impact the IT department
  • New technology solutions for business applications using AI and IoT, including embedded BI and edge analytics / social media
  • How to evolve today’s BI to future AI-based solutions
  • Ethical, economic, and social considerations for using AI to support decision making.

Intended for you

This seminar is of interest to all IT professionals and tech-savvy businesspeople directly or indirectly involved in the design, delivery, and innovative use of decision-making support systems, including:

  • Enterprise, systems, solutions and data architects in data warehouse, data lakes, BI and “big data”
  • Systems, strategy and business intelligence managers
  • Data warehouse, lake and decision support systems designers and developers
  • Tech-savvy business analysts and data scientists.


Course Description

We will send the course materials and meeting instructions well in advance, as well as the invitation with a hyperlink to join us online. The seminar starts at 09:00 and lasts until 13:00. The online meeting will be available at least half an hour earlier, so please log in early to check your sound and video settings beforehand.


  1. Architectural Framework and Models for Decision-Making Support
    • Conceptual and logical architecture for information use in decision making
    • How businesspeople really make decisions and take actions
    • Considerations beyond rational choice theory and cognitive biases
    • Organisational models for decision making / action taking
    • Architectural considerations—from traditional BI to operational analytics


  2. Applying AI to Decision Making: Top-Level Considerations
    • A brief primer on AI terminology, techniques such as artificial neural networks, and emerging approaches
    • From training to operational use—data and technology options
    • Automation vs. augmentation—the key choice in applying AI
    • AI considerations for operational, tactical and strategic decision-making
    • Positioning AI in relation to Data Warehouses, Lakes, and other constructs


  3. Applying AI to Decision Making: The Devil in the Detail
    • AI in information preparation and governance
    • AI in BI and analytics tools
    • Model management
    • Centralisation vs distributed processing approaches
    • Migrating from BI to AI—key steps and options


  4. Building the Future of Decision Making with AI—Key Considerations
    • Ethical considerations for analytics and AI in business
    • Specific ethical concerns for AI-driven decision making
    • The dangers of surveillance capitalism
    • Wider ethical concerns for society
    • Potential and possible impacts of AI on the economy and employment.


Limited time? Join one day & conference recordings
Can you only attend one day? It is possible to attend only the first or only the second conference day, or of course the full conference. The presentations by our speakers have been selected in such a way that they can stand on their own. This enables you to attend the second conference day even if you did not attend the first (or the other way around). Delegates also gain four months of access to the conference recordings, so there is no need to miss out on any session.

30 June

09:00 – 09:15 | Opening by conference chairman
Plenary    Rick van der Lans
09:15 – 10:05 | The data mesh: a distributed data architecture [Dutch spoken]
Plenary    Rick van der Lans
10:10 – 11:00 | Fast Data – concepts, architecture and technology of streaming analytics [Dutch spoken]
Plenary    Bas Geerdink
11:20 – 11:50 | Agile model-driven data warehouse development: A client success story with Datavault Builder
Plenary    Ron van Braam, Guido de Vries
11:55 – 12:45 | Experiences with embedded BI in customer-facing applications [Dutch spoken]
Plenary    Marc de Haas
12:30 – 13:30 | Lunch break
13:30 – 14:20 | Continuous value creation from an incremental DWH architecture [Dutch spoken]
Plenary    Mark van der Veen
14:35 – 15:25 | Embedding Data Science in your Data Platform [Dutch spoken]
Plenary    Hans Pannekoek, Gertjan van het Hof
15:30 – 16:20 | Guidelines for Migrating Your Data Warehouse to the Cloud
Plenary    Mike Ferguson

1 July

09:00 – 09:15 | Opening by conference chairman
Plenary    Rick van der Lans
09:15 – 10:05 | Would you let AI do your BI?
Plenary    Barry Devlin
10:10 – 11:00 | Overview of the Open Energy Data Platform (OSDU) [Dutch spoken]
Plenary    Johan Krebbers
11:20 – 11:50 | Allianz Benelux Industrial Platform Industrialisation Program: BI on Hybrid Cloud
    Jan Doumen
11:55 – 12:45 | Data Strategy according to DMBoK [Dutch spoken]
Plenary    Peter Vieveen
12:30 – 13:30 | Lunch break
13:30 – 14:20 | Physical data modelling in a ‘modern data warehouse’ based on Snowflake or BigQuery [Dutch spoken]
Plenary    Rogier Werschkull
14:35 – 15:25 | The data kitchen of the RIVM: from test location to Covid-19 dashboard [Dutch spoken]
Plenary    Jeroen Alblas, Martijn van Rooijen
15:30 – 16:20 | Guidelines for designing sustainable data architectures [Dutch spoken]
Plenary    Rick van der Lans

Workshop 2 July

09:00 – 13:00 | How to Revamp your BI and Analytics for AI-based Solutions
2 July    Barry Devlin