In my view, the design of a data tech stack must answer two sets of questions:
1. Is it sufficient only to measure what is happening in other systems? *(Where systems = technology and human processes built to accomplish some goal)*
    1. If yes - the data function is *receiving*, but not pushing
    2. If no - the data function must both receive and push; it is both a recipient and a source, or courier, of data between systems
2. Are the surrounding systems mature? Is there, at least, a mature process for achieving that maturity?
    1. If yes - iteration can be slower and more predictable, and the emphasis should be on stability, governance, and precision
    2. If no - iteration must be fast, cycle times short, and the emphasis should be on rapid tests, prototypes, and failures
These two questions should guide the overall architecture. They influence the team -- who you hire, what skills you look for, and what weaknesses are acceptable. They dictate the tool stack: what types of technology are needed to receive, or to receive and send, data; what kind of delivery layer is needed; and how quickly changes need to be incorporated.
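To make that mapping concrete, here is a minimal sketch in Python of how the two answers combine into an architectural posture. The names (`ArchitectureProfile`, `derive_profile`) and the field values are hypothetical and purely illustrative; the point is the shape of the decision, not a prescription of specific tools.

```python
from dataclasses import dataclass

@dataclass
class ArchitectureProfile:
    """Hypothetical summary of the posture implied by the two questions."""
    data_flow: str          # "receive-only" or "bidirectional"
    delivery_layer: str     # what the data function must deliver, and where
    iteration_cadence: str  # how quickly changes need to be incorporated
    emphasis: list[str]     # what the team and tooling should optimize for

def derive_profile(measure_only: bool, systems_mature: bool) -> ArchitectureProfile:
    """Map the answers to the two questions onto an architecture posture.

    measure_only:   True if it is sufficient to measure what happens in other systems.
    systems_mature: True if the surrounding systems (or the process that matures
                    them) are themselves mature.
    """
    data_flow = "receive-only" if measure_only else "bidirectional"
    delivery_layer = (
        "reporting and analysis surfaces"
        if measure_only
        else "reporting plus pipelines that push data back into source systems"
    )
    if systems_mature:
        return ArchitectureProfile(
            data_flow=data_flow,
            delivery_layer=delivery_layer,
            iteration_cadence="slower, predictable releases",
            emphasis=["stability", "governance", "precision"],
        )
    return ArchitectureProfile(
        data_flow=data_flow,
        delivery_layer=delivery_layer,
        iteration_cadence="fast cycles, short feedback loops",
        emphasis=["rapid tests", "prototypes", "tolerated failures"],
    )

# Example: a data function that must push data back into immature systems.
print(derive_profile(measure_only=False, systems_mature=False))
```

Each question moves a different axis: the first changes the data flow and the delivery layer, the second changes the iteration cadence and where the emphasis falls.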
Different leaders and people thrive in different situations. This is normal and natural. Some businesses progress, and so would answer these questions differently at different points in their lifecycle. Others do not, and stay comfortably on one side. It is natural that such a progression would require new leadership. Strategically, the business must incentivize and allow for this kind of progression when needed, or else risk choking itself with a misaligned data strategy.
As a rule, I don't believe in prioritizing tools over process. The process above should point you in a direction and highlight which sorts of tools might achieve the needed architecture.