Data virtualisation: it’s time to think differently

Alberto Pan, Chief Technical Officer at Denodo, explains why a different approach to data virtualisation is needed

In our current landscape, data is the lifeblood of every organisation, regardless of size or sector. It has become a crucial part of doing business. By harnessing it effectively, companies can boost productivity and improve decision making, enabling them to stand out from the competition and offer real value to their customers.

The challenge for businesses has always been how to harness the massive amounts of data being produced in order to better understand and utilise it. And, with IDC predicting that worldwide data levels will grow 61% to 175 zettabytes by 2025, it’s a challenge that isn’t going away anytime soon.

Over the years, data virtualisation has emerged as an answer to this challenge. By giving organisations a single, logical view of all the information they store – whether structured or unstructured, in the cloud or on-premises – data virtualisation has become the go-to tool for many.

It enables data to be accessed easily through front-end solutions, such as applications and dashboards, without the user having to know its exact storage location or how it is formatted. Needless to say, data virtualisation has opened many doors for those looking to achieve better visibility and, in turn, derive true value from their data.
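
To make that concrete, here is a minimal sketch of what consumption through a virtual layer can look like, assuming the virtualisation server is reachable over a standard ODBC connection. The DSN, credentials, view name and columns are all hypothetical; the point is simply that the consumer writes ordinary SQL against one logical view and never references the underlying systems or their formats.

```python
# Minimal sketch: querying one logical view through a generic ODBC
# connection. The DSN ("virtual_layer"), credentials and the
# "customer_360" view are hypothetical stand-ins for whatever the
# virtualisation server actually exposes.
import pyodbc

conn = pyodbc.connect("DSN=virtual_layer;UID=report_user;PWD=secret")
cursor = conn.cursor()

# The consumer asks one question of one view; the virtual layer decides
# which underlying systems the rows actually come from.
cursor.execute(
    "SELECT customer_id, region, lifetime_value "
    "FROM customer_360 "
    "WHERE region = ?",
    "EMEA",
)

for customer_id, region, lifetime_value in cursor.fetchall():
    print(customer_id, region, lifetime_value)

cursor.close()
conn.close()
```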

Yet, although certain benefits of data virtualisation are well publicised, many organisations are still missing out on its full potential by limiting its use to the logical data warehouse. Whilst logical data warehouses are both powerful and flexible, it’s time that businesses started thinking outside the box in order to realise the benefits of data virtualisation beyond them.

One small step for businesses; one giant leap for data virtualisation

Since the term ‘logical data warehouse’ was first coined over a decade ago, data virtualisation has become synonymous with it. So much so, in fact, that some organisations now struggle to think about data virtualisation in any other terms.

As we enter a new decade, it’s the perfect time for organisations to shift their way of thinking. After all, a relatively small shift in mindset could herald a new era in the way that businesses think about and use data virtualisation.

Boardroom leaders and employees alike need to start considering this type of technology outside of the logical data warehouse. Moving forward, data virtualisation should be thought of under a new definition: as a ‘unified data delivery platform’. Only then can businesses fully reap the rewards.

Under this new definition, data virtualisation has the power to unify all the information stored not only in data warehouses but also in data lakes, data marketplaces, streaming systems and any other data delivery platform. It essentially acts as a single connecting layer, tearing down silos and making all data accessible across an organisation.
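
As a rough illustration of that unification, the sketch below uses two in-memory SQLite databases as stand-ins for a warehouse and a data lake, then answers one question across both with a single query. All of the table and column names are invented, and a real virtualisation platform would federate many more source types without copying the data; the sketch only shows the shape of the idea: one logical query, several underlying stores.

```python
import sqlite3

# Toy sketch of federation: two in-memory SQLite databases stand in for
# a warehouse and a data lake. Every name and row here is invented.
virtual = sqlite3.connect(":memory:")  # plays the role of the unified layer
virtual.execute("ATTACH DATABASE ':memory:' AS warehouse")
virtual.execute("ATTACH DATABASE ':memory:' AS lake")

# Seed each stand-in source with a little data.
virtual.execute("CREATE TABLE warehouse.orders (customer_id TEXT, amount REAL)")
virtual.execute("CREATE TABLE lake.web_events (customer_id TEXT, page TEXT)")
virtual.executemany("INSERT INTO warehouse.orders VALUES (?, ?)",
                    [("c1", 120.0), ("c2", 75.5)])
virtual.executemany("INSERT INTO lake.web_events VALUES (?, ?)",
                    [("c1", "/pricing"), ("c1", "/docs")])

# One logical query spanning both "systems"; the consumer sees a single
# result set and never talks to the individual stores directly.
rows = virtual.execute("""
    SELECT o.customer_id, o.amount, e.page
    FROM warehouse.orders AS o
    LEFT JOIN lake.web_events AS e ON e.customer_id = o.customer_id
""").fetchall()

for row in rows:
    print(row)
```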

This improved accessibility and visibility lets teams get to data quickly and from anywhere. The knowledge they glean from it informs business decisions, helps to increase overall productivity and keeps operations running as smoothly as possible.

The next chapter in data virtualisation

In order to achieve these benefits and realise the full power of data virtualisation, organisations must first change traditional mindsets. When thought of as a unified data delivery platform that can be used beyond the logical data warehouse, data virtualisation can add real value to other architectures within an organisation.

For example, under this way of thinking, data virtualisation can help companies to:

• Create data service layers – This can grant an increased level of control by preventing project teams from creating their own siloed data sets or using non-certified data sources. Instead, developers throughout an organisation are able to access and reuse consistent, certified data sets. This ultimately means that data can be accessed in a simple, agile and easy-to-use way to inform decision making, while still being effectively secured and governed within the organisation (a minimal sketch of this idea follows this list).

• Speed up the journey to cloud – Although cloud is often seen as imperative to a modern infrastructure, implementing it can sometimes prove difficult. Migration can take an extremely long time, and data often becomes even more fragmented in the process, with some of it moving to the cloud and some staying on-premises. With its abstraction capabilities, data virtualisation can help to combat this: it facilitates the migration of legacy architecture to the cloud, then the evolution of that migrated architecture to more modern cloud analytics platforms, all while offering simplified access to data in hybrid architectures.

• Open up the big data world – By making the data residing in big data platforms easy to access and exploit, data virtualisation allows even those with no specialist big data skills to benefit from it. It also supports big data initiatives by letting data scientists focus on analytics rather than data integration.
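
To give the data service layer idea above a concrete shape, the sketch below publishes a ‘certified’ view over a raw table along with one shared access function that project teams call instead of building their own extracts. The schema, the decision to hide the email column and the function name are all hypothetical; in practice the view would live in the virtualisation layer itself, with security and governance applied centrally.

```python
import sqlite3

# Toy sketch of a data service layer: one certified view, one shared
# access path. All names and the masking rule are invented.
layer = sqlite3.connect(":memory:")
layer.execute(
    "CREATE TABLE customers_raw "
    "(customer_id TEXT, email TEXT, region TEXT, lifetime_value REAL)"
)
layer.execute("INSERT INTO customers_raw VALUES ('c1', 'a@example.com', 'EMEA', 310.0)")

# The certified view: a governed, reusable data set. Sensitive columns
# (here, email) are simply never exposed to consumers.
layer.execute("""
    CREATE VIEW customers_certified AS
    SELECT customer_id, region, lifetime_value
    FROM customers_raw
""")

def get_certified_customers(region: str) -> list[tuple]:
    """Single entry point project teams reuse instead of copying raw data."""
    return layer.execute(
        "SELECT customer_id, region, lifetime_value "
        "FROM customers_certified WHERE region = ?",
        (region,),
    ).fetchall()

print(get_certified_customers("EMEA"))
```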

There’s no doubt about it: as a unified data delivery platform, data virtualisation can help to give organisations a much-needed competitive edge. Under this new way of thinking, it’s no longer limited to the logical data warehouse, opening the door to a whole new world of possibilities.