Updated: Aug 5
Nowadays data is everywhere, but discovering and combining it to extract value can be difficult: sources are heterogeneous, data volumes are large, and real-time access is often impossible. This is a scenario where data virtualisation is a game changer. It provides a virtual abstraction layer in which information from all sources, whether structured, semi-structured or unstructured, can be viewed, transformed on the fly and combined, no matter where it comes from.
This layer can have many use cases in the organisation:
Abstracting business users from the technical complexity of the data sources;
Creating or normalising a semantic layer that standardises business concepts across departments; for example, a client is somebody who buys a product, and this definition should be accepted by every department;
Avoiding data replication: data is fetched in real time, directly from the source, when queried;
Increasing traceability, so the user can see where a specific field comes from and where a view is being used;
Acting as a single access point to the data, providing the necessary clearance specific to each role or application.
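The "no replication" idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not a real data virtualisation product: the sources (a `clients` table in a database and an `orders` CSV feed) and all names are invented for the example. The point is that the virtual view joins the live sources at query time instead of copying data into a warehouse.

```python
import csv
import io
import sqlite3

# Source 1: a structured system (here an in-memory SQLite database).
# Table and data are hypothetical, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clients (id INTEGER, name TEXT)")
db.executemany("INSERT INTO clients VALUES (?, ?)",
               [(1, "Acme"), (2, "Globex")])

# Source 2: a semi-structured feed (here a CSV document).
orders_csv = "client_id,amount\n1,100\n1,250\n2,40\n"

def client_orders():
    """Virtual view: combines both sources on the fly.

    Nothing is materialised; every call reads the live sources
    and joins them at query time, so results are always current.
    """
    names = dict(db.execute("SELECT id, name FROM clients"))
    for row in csv.DictReader(io.StringIO(orders_csv)):
        yield {"client": names[int(row["client_id"])],
               "amount": float(row["amount"])}

for record in client_orders():
    print(record)
```

A consumer of `client_orders` never needs to know that one source is a database and the other a CSV feed, which is the abstraction the list above describes.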
With this tool, time-to-solution can be 5 to 10 times faster than with traditional data integration tools and ETL processes, since data access is fast and the approach is very agile. It adapts rapidly to change, without weeks or months of waiting for new information, reducing the costs associated with data integration and consolidation by 50 to 75% and increasing the return on investment.
by Mariana Pinto
Business Intelligence Consultant @Passio Consulting