
Big Data: Commoditisation, protection and structural shifts – using data in 2018

2018 is the year companies must harness bots and AI, alongside data, to drive an improvement in customer experience.

2018 is set to be a year of subtle but powerful technological change in our use of data. We will see the first trials of 5G revolutionising mobile connectivity, deeper integration of automation in the workplace and the introduction of the General Data Protection Regulation (GDPR) in May. Each will be felt gradually but will fundamentally change how technology businesses in Europe function. However, 2018 also brings an opportunity for companies to harness these technological shifts, such as bots and AI, alongside data to drive an improvement in customer experience.

Consumer data as consumer commodity

With GDPR coming into force in May, we can expect data security to remain a priority in 2018. There will be more onus on brands being responsible with, and respectful of, customers’ data, and that’s a great thing. The big data breaches of 2017, which affected a number of companies, will have prompted the whole industry to up its cyber-security game.

For companies with access to proprietary data, this can be a double-edged sword. While they gain a competitive advantage from holding this data, they will also need to be scrupulous about how the data entrusted to them is used and protected. Meanwhile, with big players such as Facebook and Google amassing vast resources, we are already seeing the benefits of data asymmetry accrue to them. Data asymmetry is an evolution of the idea that “he who learns fastest wins” into “he who has the most data learns fastest, and therefore wins”.

As consumers become more cognisant of this, organisations will need to work fast, test and iterate their customer experience, and deliver as much value as possible if they want people to continue sharing data with them. Not only will regulations enforce this, but individuals will become more reluctant to share vast amounts of data without feeling they receive a fair and valuable return.

That being said, proactive data sharing could increase. When we launched BusyBot – Trainline’s crowdsourced bot that allows travellers to see which part of the train has seats available – I never anticipated the level of engagement we have seen. 26,000 people in the UK now submit information to BusyBot on a daily basis to help their fellow rail travellers.
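To make the crowdsourcing idea concrete, here is a minimal sketch of how reports like BusyBot's might be combined into a per-carriage picture of seat availability. All names, report levels and the simple majority-vote logic are illustrative assumptions, not Trainline's actual implementation:

```python
from collections import defaultdict

def aggregate_reports(reports):
    """Combine crowdsourced busyness reports into a per-carriage summary.

    reports: list of (carriage, level) tuples, where level is a label
    such as "empty", "some_seats" or "full" submitted by travellers.
    Returns the most frequently reported level for each carriage.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for carriage, level in reports:
        counts[carriage][level] += 1
    # Majority vote per carriage: the level reported most often wins.
    return {
        carriage: max(levels, key=levels.get)
        for carriage, levels in counts.items()
    }

# Example: three travellers report on a two-carriage train.
summary = aggregate_reports([
    ("A", "full"), ("A", "full"), ("B", "some_seats"),
])
print(summary)  # {'A': 'full', 'B': 'some_seats'}
```

A real system would also weight reports by recency and discard stale ones, but the core idea is the same: many small individual submissions aggregate into a useful shared signal.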

How bots will boost people power

Perhaps a more common use of bots – one we all recognise – is within the realm of customer service. If 2017 was the year voice interaction and the use of AI and bots really took off, 2018 will be the year we see that process refined. Whether through smart home products, our phones, a web app or Twitter, today we can interact with bots in countless ways. However, generally speaking, we’re always pretty aware of what’s a bot and what’s not. As more people interact with bots and share more data with them in 2018, that distinction will begin to blur.

Every message sent to a bot is an act of data sharing. The benefits of bots are circular: the user receives a fast response, while for the bot and the business, the data fed to the bot nurtures it, improving its utility for future interactions. As much as those improvements will mean the jarring disconnect of battling a bot repeating nonsense becomes a thing of the past, one of the most distinct benefits of better bots will be knowing when a human is required to take control. In 2018 we will see growing teamwork between bots and people: bots will improve customer support by knowing when to hand a conversation over to a human.
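The hand-off described above can be sketched as a simple routing decision. This is a hypothetical example, not any particular vendor's product; the confidence threshold, attempt limit and frustration keywords are all assumptions chosen for illustration:

```python
def route_message(message, intent_confidence, failed_attempts):
    """Decide whether a bot should answer or hand off to a human agent.

    intent_confidence: the bot's confidence (0.0 to 1.0) that it has
    understood the user's intent; failed_attempts: how many exchanges
    in this conversation have already gone unresolved.
    """
    # Explicit requests for a person (or clear frustration) always escalate.
    frustrated = any(
        word in message.lower() for word in ("agent", "human", "useless")
    )
    if intent_confidence < 0.6 or failed_attempts >= 2 or frustrated:
        return "human"  # escalate: a person takes over the conversation
    return "bot"        # the bot is confident enough to reply itself

print(route_message("Where is my refund?", 0.9, 0))      # bot
print(route_message("Let me talk to a human!", 0.9, 0))  # human
```

The design point is that the bot's most valuable skill is recognising its own limits: rather than looping on a misunderstood query, it elevates the conversation while the context it has gathered travels with it.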

Cloudy with a chance of late adoption

As much as we are likely to see an ever-growing number of customer-facing uses of data and bots, these products will be impossible without the correct technological structures in place. Successful apps generate terabytes upon terabytes of data every day, and this needs to be stored somewhere. Building on-premise architectures (for example, large on-site servers and data storage facilities) for this kind of data management is not only costly, but also naïve and short-sighted. That huge, monolithic legacy database that three (or perhaps more accurately, ten) years ago seemed like a good solution simply won’t cut it in 2018.

Up until this point the cloud has been brilliant. It has enabled greater agility, increased organisational efficiencies and, in many cases, off-loaded the need for bulky, costly IT infrastructure. However, it hasn’t been an essential technology for businesses to adopt. In 2018 that will change. The data lake available to technology companies will keep growing, and to use that perpetually evolving resource incrementally and organically, you need an agility that only the cloud can afford.

Ultimately, if the past few years have seen the importance and awareness of the collection and use of big data grow, 2018 is primed to be the year in which the onus shifts from the data itself to the structures that govern, protect and power its use. In 2018, these structures will be the backbone of the most important customer-facing innovations powered by data.