Banks need to keep up with data explosion


Use big data to gain insight into needs

The explosion in data sources, combined with the coming of age of data science and open-source data technologies, has created a clear divide between banks that are ready to embrace the data revolution, and those that are not.

Banks need to re-invent how they work, given the exponential speed at which technology is evolving, and make harnessing data assets a key priority.

Our data-driven world raises questions about privacy and who owns the data when someone shares their personal information. This debate has existed since the advent of the Internet.

Organisations that collect big data want to run analytics to understand customers and improve their services, while privacy advocates argue that users should regain sovereignty over their own data.

Collecting and storing data while abiding by ever-increasing privacy and regulatory compliance requirements makes for a deeply complex operating environment for banks.

Some have suggested that privacy will become mathematically impossible in a matter of years when artificial intelligence (AI), combined with data analytics, can start to plug knowledge gaps by inferring from known data.

What is important is making sure people have more direct control over their data and can choose what is made available.

Generally, people do not mind giving out data if they get something in return. As long as customers are given a choice, see the benefits and are asked for their consent, they are more likely to share their data.

Banks and other service providers have to tread a fine line between being helpful and being intrusive.

When used correctly, big data is powerful.

Our team in India has worked out how data analytics could be used to identify potential instances of money laundering and address financial crime risk.
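One simple form such analytics can take — purely as an illustration, not a description of the bank's actual method — is flagging accounts with repeated cash deposits just below a reporting threshold, a classic "structuring" red flag. The threshold, margin and count below are hypothetical parameters.

```python
# Illustrative sketch only: flag accounts with many deposits just below a
# reporting threshold ("structuring"), one classic money-laundering red flag.
# THRESHOLD, NEAR_MARGIN and SUSPICIOUS_COUNT are hypothetical, not policy.
from collections import defaultdict

THRESHOLD = 10_000       # hypothetical reporting threshold
NEAR_MARGIN = 0.10       # count deposits within 10% below the threshold
SUSPICIOUS_COUNT = 3     # flag an account after this many such deposits

def flag_structuring(transactions):
    """transactions: iterable of (account_id, amount) tuples."""
    near_threshold = defaultdict(int)
    for account, amount in transactions:
        if THRESHOLD * (1 - NEAR_MARGIN) <= amount < THRESHOLD:
            near_threshold[account] += 1
    return {a for a, n in near_threshold.items() if n >= SUSPICIOUS_COUNT}

txns = [("A", 9500), ("A", 9900), ("A", 9200), ("B", 500), ("B", 12000)]
print(flag_structuring(txns))  # {'A'}
```

Real detection systems combine many such signals with statistical and machine-learning models; this single rule is only meant to show the shape of the problem.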

With the rise in regulation since the 2008 financial crisis, we are also exploring solutions to improve reporting so that it meets the requirements of central banks.


We have invested in building our own "data lake", a state-of-the-art platform that lets us embrace the data revolution and move away from traditional data warehouses, which were limited, expensive and slow to use.

The success of any venture into big data depends on data you can trust.

Indeed, data quality is one of the biggest problems, exacerbated by the diverse nature of data coming from both internal and external data sources.

Making sense of data in a unified model is crucial. Without that, we end up with data but not information.

As a bank, we are focusing on the root of the problem.

We are looking at open standards such as the Financial Industry Business Ontology to help us achieve this.
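At a much smaller scale than an industry ontology such as FIBO, the idea of a unified model can be sketched as mapping records from disparate systems onto one canonical schema. The two source schemas and all field names below are hypothetical, purely to show the pattern.

```python
# Illustrative sketch: normalising customer records from two hypothetical
# internal systems into one unified model -- the kind of mapping that an
# open ontology such as FIBO formalises at industry scale.

def from_core_banking(rec):
    # hypothetical legacy schema: cust_no / name / ctry
    return {"customer_id": rec["cust_no"],
            "full_name": rec["name"],
            "country": rec["ctry"]}

def from_cards_system(rec):
    # hypothetical second schema: id / first + last / country_code
    return {"customer_id": rec["id"],
            "full_name": f'{rec["first"]} {rec["last"]}',
            "country": rec["country_code"]}

# Once both feeds share one schema, downstream analytics see information,
# not just data.
unified = [
    from_core_banking({"cust_no": "C1", "name": "Ana Lim", "ctry": "SG"}),
    from_cards_system({"id": "C2", "first": "Raj", "last": "Nair",
                       "country_code": "IN"}),
]
print(unified[1]["full_name"])  # Raj Nair
```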

There are also techniques in the areas of machine learning and AI that are accelerating the convergence of data models across disparate sources.

Despite the prevalence of smart algorithms capable of using data to derive intelligent conclusions, I am of the view that we are years away from being able to rely on machines to run our lives.

A colleague described a situation in which he received a threatening call from a debt collection agency, only to find out that the machine had matched him with the data of someone else with the same name.
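The mix-up above is a record-matching failure, and a toy example makes the cause plain: matching on name alone conflates distinct people, while adding even one more attribute (here a hypothetical date of birth) keeps them apart. The names and dates are invented for illustration.

```python
# Illustrative sketch of the record-matching failure described above:
# two different people share a name, so matching on name alone wrongly
# links them, while matching on name plus date of birth does not.

def match_on_name(a, b):
    return a["name"] == b["name"]

def match_on_name_and_dob(a, b):
    return a["name"] == b["name"] and a["dob"] == b["dob"]

debtor   = {"name": "John Tan", "dob": "1970-03-14"}  # invented records
customer = {"name": "John Tan", "dob": "1985-11-02"}

print(match_on_name(debtor, customer))          # True  -> the wrong match
print(match_on_name_and_dob(debtor, customer))  # False -> correctly distinct
```

Production entity-resolution systems go much further (fuzzy matching, scoring across many attributes, human review), which is exactly why data-quality governance still needs experts.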

Clearly, banks and many institutions still require experts in data quality governance.

While it is important for banks to strive to become data-driven, our business is not a technical machine with input and output factors. Big data is a means to an end, not the end.

We do not measure success by the amount of data we are able to harness or the number of apps we are able to invent, but by the extent to which big data helps us gain more insight into the human needs of our clients.

I am a firm believer that with the advancements in machine learning, humanity will still be the architect of our world.

The writer is group chief information officer of Standard Chartered Bank. This article was published in The Business Times yesterday.