The rule used to be that data architectures had to be designed independently of technologies and products: first design the data architecture, then select the right products. This was achievable because many products were reasonably interchangeable. But is that still possible? In recent years we have been confronted with an unremitting stream of technologies for processing, analyzing, and storing data, such as Hadoop, NoSQL, NewSQL, GPU databases, Spark, and Kafka. These technologies have a major impact on data processing architectures, such as data warehouses and streaming applications. Most importantly, many of these products have highly distinctive internal architectures and directly enforce certain data architectures. So, can we still develop a technology-independent data architecture? This session explains the potential influence of these new technologies on data architectures.
- Are we stuck in our old ideas about data architecture?
- From generic to specialized technologies
- Examples of technologies that enforce a certain data architecture (a minimal Kafka sketch follows this list)
- What is the role of software generators in this discussion?
- New technology can only be used optimally if the data architecture is geared to it.
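To make the third bullet concrete: Kafka, one of the technologies named above, exposes only append-only topics, so records can be published and replayed but never updated in place. Any architecture built on it must therefore model data as immutable event streams rather than updatable tables. The sketch below is a minimal illustration of that constraint, assuming a local broker on localhost:9092 and the kafka-python client; the topic name and payload are hypothetical.

```python
# Minimal sketch: Kafka's append-only log as an enforced architecture.
# Assumptions: a Kafka broker on localhost:9092 and the kafka-python
# client (pip install kafka-python); topic and payload are hypothetical.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Every write is an append to the log; there is no update or delete.
producer.send("customer-events", b'{"customer_id": 42, "event": "address_changed"}')
producer.flush()

consumer = KafkaConsumer(
    "customer-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # consumers replay the log from the start
)
for record in consumer:
    # Downstream systems must derive current state from the event stream.
    print(record.value)
    break
```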