YOU CAN ATTEND DAY 1, DAY 2, OR BOTH! WITH WORKSHOPS OF YOUR CHOICE.
Your conference chair
"Connecting the Dots" - Creating Data-Driven Business Value
It is no longer about convincing management of the value of using data to create business value. The real question has shifted to ensuring that this value is delivered in a sustainable way. Too many organizations still fail to actually extract value from their data initiatives. What are the key elements that must be put in place to guarantee success? How do you move from a technology-centric to an integrated data strategy? How do we improve the data literacy of stakeholders and make sure data products can be used effectively? With clockwork regularity we introduce new concepts such as data fabric and data mesh, and the question remains to what extent these bring new solutions or introduce new problems.
You will learn:
- Which aspects are truly relevant for extracting value from your data
- How to increase the "data literacy" of your employees
- How more accurate data can contribute to better algorithms
- Why it is important to look not only at internal but also at external data
- Which technological solutions are needed to set up sound data management
- Which data governance model can lead to the best results.
The Transition to the New Pension Contract - Data Management Approach and Tooling
With the arrival of the new pension contract in the Netherlands, APG faces a challenge: how are the pension rights of millions of participants converted to the new pension scheme? Data management plays an important role in this transition. Arjen Bouman offers a look behind the scenes of this enormous operation and shares the experience and lessons learned he gained during this project.
Enterprise Semantic Data Management
The meaning of data is receiving more and more attention. Data lineage, the traceability of data to its meaning and to the reason the data is used, is increasingly a critical success factor. In addition, the ever greater variety of data makes it necessary to get a grip on the individual data sources. The scarcity of data specialists forces us to handle data more intelligently and to make available knowledge explicit. The introduction of a distributed data architecture provides the final push to get the "information house in order".
Processing data is therefore not only a logistical challenge; it also requires a reliable approach to capturing the meaning of data, one that goes beyond the traditional description of the data warehouse structure: a semantic approach is required.
This semantic approach takes the problem space as the starting point for the description: the domain the data is about. From a precise analysis and model of that domain follows a traceable translation to the model of the data itself in the solution space. The result can be seen as a knowledge graph: a network of connected (linked) data, including the definitions of this data and its lineage to the foundations in laws and regulations, compliance guidelines and business definitions.
Such an approach is relevant beyond the data warehouse: the result is an explicit, unambiguous record of the knowledge about the relevant data in an organization. Marco Brattinga takes you into the world of enterprise semantic data management along the following topics:
- The relevance of semantics for the data warehouse
- The knowledge graph: connecting data through and with metadata
- The problem space versus the solution space
- Semantic modeling and data lineage
- The importance of an augmented data catalog
- Practical guidance for implementing data lineage.
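The knowledge-graph idea above, data connected to its definitions and to the regulations that ground it, can be sketched as plain subject-predicate-object triples. All identifiers below are invented for illustration and are not taken from the session:

```python
# Minimal knowledge-graph sketch: triples link a warehouse column to its
# business concept, and that concept to its legal ground and purpose.
# All names (dwh:, concept:, source:) are illustrative, not from the session.
triples = [
    ("dwh:customer_birthdate", "meta:means",     "concept:DateOfBirth"),
    ("concept:DateOfBirth",    "meta:definedIn", "source:GDPR-Art4"),
    ("concept:DateOfBirth",    "meta:usedFor",   "purpose:AgeVerification"),
]

def lineage(node):
    """Collect every edge reachable from a data element: its full lineage."""
    edges = []
    for s, p, o in triples:
        if s == node:
            edges.append((s, p, o))
            edges.extend(lineage(o))  # follow the chain to the grounds
    return edges

# Walking from the column yields its meaning, legal basis and purpose.
trail = lineage("dwh:customer_birthdate")
```

Real implementations would use an RDF store and a standard vocabulary rather than in-memory tuples, but the traceability principle is the same.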
Lunch break
Responsible Data Science
As data scientists, our impact on the world around us grows significantly every day. But what concrete steps can we take to become responsible data scientists? In this session you will be introduced to responsible data science and to how ethics can be integrated into technology. Tanja Ubert and Gabriella Obispa present their vision on how to incorporate responsibility into our work with data: which questions we should ask, and which responsibility we, as specialists, must take when we collect, use and implement data solutions in our world.
- Why responsible data science?
- A practical starting point for incorporating ethics into your data team and organization
- Value sensitive design: embedding values in your technological innovations
- Workshop: self-assessment of current projects with the data ethics framework. How do you start implementing transparency, accountability and fairness? Design the first version of a (re)defined guideline to work towards responsible data science
- Ethical dialogue: sharing insights in groups, the start of a community.
Simplified Data Architecture: Data Warehouse Automation with Datavault Builder
Data architectures are becoming increasingly complex due to the need to serve many purposes: multiple personas, ranging from operational data users to data scientists, need access to a variety of managed, governed data and demand real-time, self-service reporting and analytics. Applying principles while designing data architectures helps simplify the development and usage of those architectures by developers and end users. We apply the following principles:
- Data and the meaning of data are managed separately;
- Storage and Compute are implemented and managed separately;
- Requirements determine the needed capabilities and technological solutions, in that order;
- Automate data engineering as much as possible.
In this session we will show how Connected Data Group and 2150 Datavault Builder work together in designing the simplified architecture by focusing on data modelling with Data Vault and automating the data engineering process with Datavault Builder.
During this session you will learn:
- Principles of the simplified architecture
  - Simplifying complexity
  - Separating data from metadata
  - Eliminating data replication
  - Automating data engineering
- Datavault Builder
  - Relevance
  - Overview of the development process
  - Layers
  - USPs
  - 5-minute demo
  - Q&A
How to use the full benefits of Data Vault? Data Vault is the modeling approach for becoming agile in Data Warehousing. The Data Vault approach is unbeatable, especially when the technical implementation is abstracted away through automation. Datavault Builder combines its Data Vault driven Data Warehouse approach with a standardized development process that allows you to scale and allocate development resources flexibly.
Quickly develop your own Data Warehouse. Rely on the visual element of Datavault Builder to facilitate the collaboration between business users and IT for fully accepted and sustainable project outcomes. Immediately lay the foundation for new reports or integrate new sources of data in an agile way. Deliver new requirements and features with fully automated deployment. Agile Data Warehouse development and CI/CD become a reality.
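As a rough illustration of the Data Vault modeling style the session builds on, a hub identifies a business key by a deterministic hash, while a satellite historizes the descriptive attributes for that key. The keys and attributes below are invented, and real implementations (including Datavault Builder's generated code) differ in detail:

```python
import hashlib
from datetime import datetime, timezone

def hash_key(business_key: str) -> str:
    """Data Vault hubs identify a business key by a deterministic hash,
    typically after normalization (upper-casing here, as an example)."""
    return hashlib.md5(business_key.upper().encode()).hexdigest()

# Hub: exactly one row per business key (an invented customer number)
hub_customer = {hash_key("CUST-001"): {"business_key": "CUST-001"}}

# Satellite: descriptive attributes, historized by load timestamp,
# referencing the hub via the same hash key
sat_customer = [{
    "hub_key": hash_key("CUST-001"),
    "load_ts": datetime.now(timezone.utc),
    "name": "Example Corp",  # illustrative attribute
}]
```

Because the hash is deterministic, any source system loading the same business key lands on the same hub row, which is what makes parallel, automated loading of hubs, links and satellites possible.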
DataOps in Practice
Having the right data in the right place at the right time with the right quality is essential for supporting business decisions, optimization, automation and the feeding of AI models. Just as in software development, you want to deliver new, high-quality functionality quickly. New data, new insights and new AI models should be made available to users not on a monthly schedule, but as soon as they are ready. That is what DataOps can achieve in theory. In practice, however, organizations run into quite a few challenges that make it considerably harder to put the DataOps process into effect, for example dealing with development sandboxes and representative test data across systems.
In this session, Niels Naglé and Vincent Goris show what DataOps is and why it is not simply DevOps for data. They discuss the unique challenges, solutions to those challenges, and their lessons learned.
- How does DataOps relate to DevOps and where are the differences?
- A step-by-step plan for implementing DataOps in your organization
- The effect on your teams and organization
- The importance of metadata, the data catalog and automation
- The challenges and practical solutions.
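One practice that typically distinguishes DataOps from plain DevOps is that the data itself is validated on every pipeline run, not just the code. A minimal sketch of such a quality gate, with an invented schema and invented rules:

```python
# A DataOps-style quality gate: every batch is checked automatically as it
# moves through the pipeline, just as code is tested on every commit.
# The schema (customer_id, age) and the rules are invented examples.
def quality_gate(rows):
    errors = []
    for i, row in enumerate(rows):
        if row.get("customer_id") is None:
            errors.append(f"row {i}: missing customer_id")
        if not (0 <= row.get("age", -1) <= 120):
            errors.append(f"row {i}: implausible age {row.get('age')}")
    return errors

batch = [
    {"customer_id": 1, "age": 42},
    {"customer_id": None, "age": 37},
    {"customer_id": 3, "age": 250},
]
issues = quality_gate(batch)  # a real pipeline would fail fast on issues
```

In a production setup these checks would live in the orchestration layer and their results would feed monitoring, but the principle of testing data like code is the core of the idea.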
Data Minimization: A New Design Principle for Data Architectures
[Video intro] We have all seen studies showing the enormous amounts of data created on this planet every day. A large part of this data, however, is not new but copied data. In existing data architectures, such as data warehouses, a great deal of data is copied, and modern architectures such as the data lake and data hub also rest entirely on copying data. This unbridled copying must be reduced. We rarely stop to think about it, but copying data has many drawbacks, including higher data latency, complex data synchronization, more complex data security and data privacy, higher development and maintenance costs, and degraded data quality. It is time to apply the data minimization principle when setting up new data architectures. This means striving to minimize copied data. In other words, users get more access to original data, and we move from data-by-delivery to data-on-demand. The latter mirrors what happened in the film world: from picking up videos at a store to video-on-demand. In short, data minimization means we are going to 'Netflix' our data.
- The effect of data minimization on data warehouses, data lakes and data hubs
- The network becomes the database
- Using translytical databases, analytical databases and data virtualization to apply data minimization
- Focus on business rules, not on data storage
- Examples of applying data minimization to existing data architectures.
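The shift from data-by-delivery to data-on-demand can be sketched in a few lines: instead of maintaining a copied aggregate, a virtual view computes its answer from the original data at query time. The data below is invented:

```python
# Data-by-delivery copies data; data-on-demand exposes a virtual view that
# reads the original source only when queried. Source rows are invented.
source = [
    {"country": "NL", "revenue": 120},
    {"country": "BE", "revenue": 80},
    {"country": "NL", "revenue": 40},
]

# Copy-based: a materialized aggregate that must be kept in sync by hand
copied_totals = {"NL": 160, "BE": 80}

# On-demand: computed from the original data at query time, never copied
def totals_on_demand(country):
    return sum(r["revenue"] for r in source if r["country"] == country)

# New data arrives at the source...
source.append({"country": "NL", "revenue": 10})
# ...and the copy is now stale, while the on-demand view is not.
```

This is, in miniature, the trade-off data virtualization platforms manage: no synchronization logic and no staleness, in exchange for paying the computation cost at query time.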
Cutting Data Fabric and Mesh to Measure
[Video introduction] The data warehouse is over thirty years old. The data lake just turned ten. So, is it time for something new? In fact, two new patterns have recently emerged—data fabric and data mesh—promising to revolutionise the delivery of BI and analytics.
Data fabric focuses on the automation of data delivery and discovery using artificial intelligence and active metadata. Data mesh has a very novel take on today’s problems, suggesting we must take a domain driven approach to development to eliminate centralised bottlenecks. Each approach has its supporters and detractors, but who is right? More importantly, should you be planning to replace your existing systems with one or the other?
In this session, Dr. Barry Devlin will explore what data fabric and mesh are, what they offer, and how they differ. We will compare them to existing patterns, such as data warehouse and data lake, data hub and even data lakehouse, using the Digital Information Systems Architecture (DISA) as a base. This will allow us to clearly see their strengths and weaknesses and understand when and how you might choose to move to one or the other.
What You Will Learn:
- Why we are seeing new patterns emerge
- What are data fabric and data mesh and how they differ
- Why you would want to use them
- What the roadblocks are to each
- Under what circumstances would you use them and where would you start.
Remote Data Modelstorming with BEAM: Lessons learnt from 2 years of data modeling training and consulting online
In this interactive session Lawrence Corr shares his thoughts and experiences on using visual collaboration platforms such as Miro and MURAL for gathering BI data requirements remotely with BEAM (Business Event Analysis and Modeling) for designing star schemas. Learn how visual thinking, narrative, a simple script with 7Ws and lots of real and digital Post-it ™ notes can get your stakeholders thinking dimensionally and capturing their own data requirements with agility in-person and at a distance.
Attendees will have the opportunity to vote visually on a virtual whiteboard and should have their smartphones ready to send Lawrence some digital notes to play the ‘7W game’ using the Post-it app.
This session will cover:
- Using BEAM (Business Event Analysis and Modeling) remotely to discover key business activity and define rich dimensional data sets
- Playing the 7W game as an icebreaker and introduction to BEAM
- Comparisons of key virtual whiteboard features in Miro, MURAL, InVision Freehand and LucidSpark
- Hybrid modelstorming – starting in-person with real Post-its, capturing work digitally and completing in the cloud
- BEAM modelstorming templates available in Miro and MURAL which you can use straight away
Driving measurable value in Established Industries with Traditional Machine Learning
[Video introduction] Developing a machine learning strategy designed to maximize business value in the age of Deep Learning
Deep Learning is so dominant in some discussions of AI and machine learning that many organizations feel that they need to try to keep up with the latest trends. But does it offer the best path for your organization? What is this technology all about and why should both executives and practitioners understand its history?
All business leaders know that they have to embrace analytics or be left behind. However, technology changes so rapidly that it is difficult to know who to hire, which technologies to embrace, and how to proceed. The truth is that traditional machine learning techniques are a better fit for most organizations than chasing after the latest trends. Still, the hyped techniques are popular for a reason, so leaders with responsibility for analytics need a high-level understanding of them.
Learning objectives
- Learn what makes Deep Learning so powerful and what are its limitations
- Understand why for many use cases traditional machine learning continues to be a much better option
- Learn the use cases in established industries where machine learning is driving measurable value
- Learn the industries and use cases where Deep Learning has made recent revolutionary progress and why
- Discuss the implications of these approaches for hiring and managing your analytics teams
- Learn how to maximize the value of your analytics portfolio by choosing the right projects and assigning the ideal resources.
Lunch break
Building a Business-Driven Roadmap for Modern Cloud Data Architecture
[Video introduction] Companies rely on modern cloud data architectures to transform their organizations into the agile analytics-driven cultures needed to be competitive and resilient. The modern cloud reference architecture applies data architecture principles into cloud platforms with current database and analytics technologies. However, many organizations quickly get in over their head without a carefully prioritized and actionable roadmap aligned with business initiatives and priorities. Building such a roadmap follows a step-by-step process that produces a valuable communication tool for everyone to deliver together.
This session will cover the four significant steps to align the data strategy and roadmap with the business. We'll start with translating business strategy into data and analytics strategies with the Enterprise Analytics Capabilities Framework. This is followed by a logical modern cloud reference data architecture that can leverage agile architecture techniques for implementation as a modern data infrastructure on any cloud, hybrid or multi-cloud environment. This will provide the basis for drilling deeper into architecture patterns and developing proficiency with DataOps and MLOps.
This session will cover:
- How to identify and translate business priorities into analytic capabilities
- How the Enterprise Analytics Capabilities Framework guides architecture roadmaps
- Modern data architecture components: data lake, DW, data hubs, and sandboxes
- Modern architecture patterns: polyglot persistence, data lakehouse, data fabric, data mesh
- Modern integration architecture components: ingestion, data pipelines, event streaming
- Modern data infrastructure on AWS, Azure, and GCP.
Creating a Predictable and Mature BI Value Stream with Data Automation
Do you want to generate more value out of your data with less effort and cost?
This presentation will help you to reduce your time to market and increase your development efficiency. Erik discusses projects he has been involved in and explains how he was able to accelerate and streamline them using WhereScape. His main focus will be on a Data Vault 2.0 implementation he was involved in at a large bank.
WhereScape Data Automation software accelerates the design, build, documentation and management of complex data ecosystems. It automates repetitive manual tasks such as hand coding, enabling developers to produce architectures in a fraction of the time, without human error.
Openness and ownership - the balancing act of enterprise data
[Video-introduction] The role of data in business processes has never been more critical. But as we develop new technologies and new skills it feels like we meet new dilemmas at every turn. Concerns about governance and compliance seem to conflict with demands for agility and collaboration. The expanding scope of the data we work with brings new ethical concerns to light.
So, are we doomed to a constant struggle for control of our data assets? I don’t think so. In this session, I’ll sketch out a provocative, but hopefully useful idea – that we have confused ownership and accountability, governance and compliance, openness and collaboration. We’ll look at some potentially new approaches, which aim to resolve some of the complex puzzles of enterprise data.
- Getting to know your enterprise data – do you really know what you have?
- Why would anyone share enterprise data?
- Security, privacy, governance, compliance – the essential differences
- The process of data sharing
- Catalogs vs Warehouses vs Lakes
- Roles and responsibilities in data ownership.
Concept Modelling - An Angst-Free Framework for Engaging your Executives
[Video introduction] We have all heard “This is the golden age of data” and “Data is the new oil” but that does not necessarily mean your senior executives are anxious to participate in Conceptual Data Modelling / Concept Modelling. The speaker recently had an interesting exception to the reluctance of senior executives to participate in data modelling. Led by the Chief Strategy Officer, a group of C-level executives and other senior leaders at a mid-size financial institution asked Alec to facilitate three days of Concept Modelling sessions.
Fundamentally, a Concept Model is all about improving communication among various stakeholders, but the communication often gets lost: in the clouds, in the weeds, or somewhere off to the side. This is bad enough in any modelling session, but it is completely unacceptable when working at the C-level. Drawing on forty years of successful consulting and modelling experience, this presentation will illustrate core techniques and necessary behaviors to keep even your senior executives involved and engaged.
Key points in the presentation include:
- What got the executives interested in the first place
- How we prepared for and structured the sessions
- How we communicated with the executives before, during, and after the sessions
- An angst-free framework for developing definitions
- How the Concept Model evolved, and the crucial findings
- The executives’ reaction during the retrospective.
Profiting with Practical Supervised Machine Learning
[Video introduction] Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
New users of these established techniques are often impressed with how easy it all seems. Modeling software for building these models is widely available, but it often yields disappointing results. Many fail to recognize that poor problem definition was the real problem, and conclude instead that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine-learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion to continually support decision-making for residual impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will overview the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results whether you are brand new to predictive analytics or you’ve made some attempts but have been disappointed in the results so far. Veteran users of these techniques will also benefit because a comparison will be made between these traditional techniques and some features of newer techniques. We will explore that while tempting, the newer techniques are rarely the best fit except in a handful of niche application areas that many organizations will not face (at least not in the short term). Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
- When to apply supervised or unsupervised modeling methods
- Options for inserting machine learning into the decision making of your organization
- How to use multiple models for value estimation and classification
- How to properly prepare data for different kinds of supervised models
- Interpret model coefficients and output to translate across platforms and languages, including the widely used Predictive Modeling Markup Language (PMML)
- Explore the pros and cons of “black box” models including ensembles
- How data preparation must be automated in parallel with the model if deployment is to succeed
- Compare model accuracy scores to model propensity scores that drive decisions at deployment.
Who is it for?
- Analytic Practitioners
- Data Scientists
- IT Professionals
- Technology Planners
- Consultants; Business Analysts
- Analytic Project Leaders.
Course Description
1. How to choose the best machine learning strategy
- How supervised learning compares to other options
- The reality and the hype regarding machine learning
- What are the classic traditional machine learning techniques?
- The two main types of supervised machine learning
2. Decision Trees: Still the best choice for many everyday challenges
- Exploring and interpreting insights with a completed decision tree model
- A brief primer on the various types of decision tree algorithms
- Strategic considerations and advantages of decision trees
- Deployment and bringing your ML models into production
3. Introducing the CART decision tree
- CART under the hood
- Processing various variable types with CART
- Understanding pruning
- How CART handles missing data with “surrogates”
- The “Rashomon effect” in machine learning
4. Additional Supervised Techniques
- Comparing linear regression to neural networks
- How to embrace the benefits of neural networks without actually using them
- Regression trees: how to use decision trees to address regression problems
- What are “ensemble” methods and why are they so popular?
- Keeping your solutions practical and transparent
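As a taste of what a CART tree does under the hood, the sketch below finds the single best split on one numeric feature by minimizing weighted Gini impurity. The data is invented, and real CART implementations add much more (pruning, surrogates for missing data, multi-feature search):

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Find the threshold minimizing weighted Gini impurity, as CART
    does when evaluating one numeric feature at one node."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue  # a split must put data on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Invented toy data: customer age vs. a churn label
ages = [22, 25, 30, 35, 40, 45]
churn = ["yes", "yes", "yes", "no", "no", "no"]
threshold, impurity = best_split(ages, churn)  # splits cleanly at age <= 30
```

A full tree simply repeats this search recursively on each side of the chosen split, which is why decision trees remain so interpretable: every node is one readable rule.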
Data and Analytics as a Line of Business
[Video introduction] By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
- The nature of data as asset
- The importance of analytics to realising that value
- Ways in which data and analytics can be developed as a line of business
- Simple models for data and analytics as a line of business
- The value of benchmarking
- How to encourage and support internal and external communities
- How to be strategically more agile when creating data and analytics lines of business
- Addressing ethics and governance concerns.
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies who have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
- User dashboards
- Predictive analytics and alerting
- Benchmarking and associative analytics
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
In any discussion of data and analytics today, concerns about privacy and compliance always come to the surface. We'll introduce the subject with enough detail for you to take the first important, practical steps towards being well governed in today's regulatory environment.
6. Narratives and gameplans
These are simple tools for mapping and aligning strategy. However, although simple, they offer subtle and effective capabilities for planning features and releases and for aligning teams such as marketing and management around a vision.
Who’s it for?
- CIOs, CTOs, analytics leaders and data management leaders
- Data scientists, and data analysts.
The Data-Process Connection
[Video introduction] Whether you call it a conceptual data model, a domain map, a business object model, or even a “thing model,” a concept model is invaluable to process and architecture initiatives. Why? Because processes, capabilities, and solutions act on “things” – Settle Claim, Register Unit, Resolve Service Issue, and so on. Those things are usually “entities” or “objects” in the concept model, and clarity on “what is one of these things?” contributes immensely to clarity on what the corresponding processes are.
After introducing methods to get people, even C-level executives, engaged in concept modelling, we’ll introduce and get practice with guidelines to ensure proper naming and definition of entities/concepts/business objects. We’ll also see that success depends on recognising that a concept model is a description of a business, not a description of a database. Another key – don’t call it a data model!
Drawing on almost forty years of successful modelling, on projects of every size and type, this session introduces proven techniques backed up with current, real-life examples.
Topics include:
- Concept modelling essentials – things, facts about things, and the policies and rules governing things
- “Guerrilla modelling” – how to get started on concept modelling without anyone realising it
- Naming conventions and graphic guidelines – ensuring correctness, consistency, and readability
- Concept models as a starting point for process discovery
- Practical examples of concept modelling supporting process work, architecture work, and commercial software selection.
DataOps for Better and Faster Analytics
[Video introduction] Adopting the DataOps methodology helps agile teams deliver data and analytics faster and in a more manageable way in modern data infrastructures and ecosystems. DataOps is critical for companies to become resilient in data and analytics delivery in a volatile and uncertain global business environment. Going beyond DevOps for continuous deployments, DataOps leverages principles from other disciplines to evolve data engineering and management.
Companies need data and analytics more than ever to be agile and competitive in today’s fast-changing environment. DataOps can be an enterprise-wide initiative or an independent agile delivery team working to improve how they deliver data analytics for their customer. Gaining traction takes time and ongoing support.
This seminar will cover:
- The challenges in current data environments and IT
- What DataOps is and how it differs from other approaches
- Which principles and technologies to focus on initially
- How to adopt DataOps to speed analytics development and delivery
- How to continuously engineer, deploy, and operationalize data pipelines with automation and monitoring
- Setting expectations and planning for DataOps maturity.
Course Description
1. Understanding why we need to change
- How business Analytics has changed from diagnostic to predictive
- How data sources are increasing
- The impact of data integration on Data Management
- Changes in IT development methodologies and organizations
- Supporting new data products
- How DataOps is emerging as the next era
- Reviewing the Agile Manifesto
- Important aspects of DevOps
- Review statistical process control for DataOps
- How DataOps can embed Data Quality and Data Governance
- Defining DataOps and the DataOps Manifesto
- Comparing DevOps to DataOps
2. Making DataOps Work
The 7 key concepts to focus on for DataOps
- How Connectors can make a difference
- How engineered data pipelines will work
- How “data drift” will impact data work
- Set up repositories for Data Governance and Data Quality
- The role of data hubs and MDM
- How to set up measurements correctly
- Leveraging DataOps Platform instrumentation
The 2 key processes to focus on for DataOps
- Components needed to deliver on business ideation
- Building data and Analytics deliverables with DataOps
3. Managing DataOps: defining Metrics and Maturity Models
- Defining Metrics for Data and Analytics delivery
- Key DataOps metrics
- How to leverage reusability metrics
- Reviewing metrics for process improvement
- Maturity stage of DataOps adoption
- CMMI-based Maturity Model
- IBM Maturity Model.
Profiting with Practical Supervised Machine Learning
[Video introduction] Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
New users of these established techniques are often impressed with how easy it all seems to be. Modeling software to build these models is widely available but often results in disappointing results. Many fail to even recognize that proper problem definition was the problem. They likely conclude that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine-learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion to continually support decision-making for residual impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will overview the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of the everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results, whether you are brand new to predictive analytics or have made some attempts and been disappointed with the results so far. Veteran users of these techniques will also benefit, because these traditional techniques will be compared with features of newer ones. We will see that, while tempting, the newer techniques are rarely the best fit outside a handful of niche application areas that many organizations will not face, at least not in the short term. Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
- When to apply supervised or unsupervised modeling methods
- Options for inserting machine learning into the decision making of your organization
- How to use multiple models for value estimation and classification
- How to properly prepare data for different kinds of supervised models
- Interpret model coefficients and output to translate across platforms and languages, including the widely used Predictive Modeling Markup Language (PMML)
- Explore the pros and cons of “black box” models including ensembles
- How data preparation must be automated in parallel with the model if deployment is to succeed
- Compare model accuracy scores to model propensity scores that drive decisions at deployment.
Who is it for?
- Analytic Practitioners
- Data Scientists
- IT Professionals
- Technology Planners
- Consultants
- Business Analysts
- Analytic Project Leaders.
Course Description
1. How to choose the best machine learning strategy
- How supervised learning compares to other options
- The reality and the hype regarding machine learning
- What are the classic traditional machine learning techniques?
- The two main types of supervised machine learning
2. Decision Trees: Still the best choice for many everyday challenges
- Exploring and interpreting insights with a completed decision tree model
- A brief primer on the various types of decision tree algorithms
- Strategic considerations and advantages of decision trees
- Deployment and bringing your ML models into production
3. Introducing the CART decision tree
- CART under the hood
- Processing various variable types with CART
- Understanding pruning
- How CART handles missing data with “surrogates”
- The “Rashomon effect” in machine learning
4. Additional Supervised Techniques
- Comparing linear regression to neural networks
- How to embrace the benefits of neural networks without actually using them
- Regression trees: how to use decision trees to address regression problems
- What are “ensemble” methods and why are they so popular?
- Keeping your solutions practical and transparent
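As a taste of what “CART under the hood” covers, here is a minimal, self-contained sketch of the core CART idea: choosing the binary split that minimizes weighted Gini impurity. Real CART additionally handles pruning and surrogate splits for missing data; the helper names below are illustrative only, not from any specific library.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Find the threshold on one numeric feature with the lowest
    weighted Gini impurity across the resulting left/right partitions."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Perfectly separable toy data: splitting at x <= 2 yields impurity 0.
threshold, impurity = best_split([1, 2, 8, 9], ["a", "a", "b", "b"])
```

A full tree simply applies this split search recursively to each partition, which is why a completed decision tree is so readable: every node is one such threshold.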
Data and Analytics as a Line of Business
[Video introduction] By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
- The nature of data as an asset
- The importance of analytics to realising that value
- Ways in which data and analytics can be developed as a line of business
- Simple models for data and analytics as a line of business
- The value of benchmarking
- How to encourage and support internal and external communities
- How to be strategically more agile when creating data and analytics lines of business
- Addressing ethics and governance concerns.
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies that have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
- User dashboards
- Predictive analytics and alerting
- Benchmarking and associative analytics
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
In any discussion of data and analytics today, concerns about privacy and compliance always come to the surface. We’ll introduce the subject with enough detail for you to take the first important, practical steps toward being well governed in today’s regulatory environment.
6. Narratives and gameplans
These are simple tools for mapping and aligning strategy. However, although simple, they offer subtle and effective capabilities for planning features and releases and for aligning teams such as marketing and management around a vision.
Who’s it for?
- CIOs, CTOs, analytics leaders and data management leaders
- Data scientists, and data analysts.
The Data-Process Connection
[Video introduction] Whether you call it a conceptual data model, a domain map, a business object model, or even a “thing model,” a concept model is invaluable to process and architecture initiatives. Why? Because processes, capabilities, and solutions act on “things” – Settle Claim, Register Unit, Resolve Service Issue, and so on. Those things are usually “entities” or “objects” in the concept model, and clarity on “what is one of these things?” contributes immensely to clarity on what the corresponding processes are.
After introducing methods to get people, even C-level executives, engaged in concept modelling, we’ll introduce and get practice with guidelines to ensure proper naming and definition of entities/concepts/business objects. We’ll also see that success depends on recognising that a concept model is a description of a business, not a description of a database. Another key – don’t call it a data model!
Drawing on almost forty years of successful modelling, on projects of every size and type, this session introduces proven techniques backed up with current, real-life examples.
Topics include:
- Concept modelling essentials – things, facts about things, and the policies and rules governing things
- “Guerrilla modelling” – how to get started on concept modelling without anyone realising it
- Naming conventions and graphic guidelines – ensuring correctness, consistency, and readability
- Concept models as a starting point for process discovery
- Practical examples of concept modelling supporting process work, architecture work, and commercial software selection.
Short on time? Attend a single day!
Do you have only one day available to attend the DW&BI Summit? Pick the topics that interest you and choose the day that fits best. The topics have been chosen to stand on their own, so it is possible to attend only the first day of the conference, or to follow day two without having attended day one. Moreover, conference participants retain access to the video recordings of their chosen day for several months afterwards, so if you have to miss a session, nothing is lost.
29 March
Plenary Jan Henderyckx
Plenary Arjen Bouman
Plenary
Plenary Erik Fransen, Petr Beles
Plenary Rick van der Lans
30 March
Plenary Lawrence Corr
Plenary Keith McCormick
Plenary, Room 1
Plenary John O’Brien
Plenary Erik van der Hoeven
Plenary Alec Sharp
Workshop