Aviva Investors interview with Greg Davies
Behavioural science is becoming increasingly influential in finance, as investors seek to understand how human biases affect markets and pricing. Combined with the rise of data-crunching algorithms designed to aid an individual’s financial choices, these psychological insights could prove transformative for the industry.
So what does the future hold for behavioural finance in an era of artificial intelligence and data analytics? Greg Davies is better placed than most to answer this question. Davies spent more than a decade as head of behavioural-quant finance at Barclays, where he built and led the world’s first applied behavioural finance team, before moving on to found behavioural consultancy Centapse. In 2017 he joined research firm Oxford Risk as head of behavioural science.
In this Q&A, Davies explains how new data-driven technologies can be used to correct investors’ irrational behaviour. The idea is not to replace human choice with impersonal algorithms, but to deploy technology to make people aware of the unconscious impulses that lead them to make bad decisions. Used in this way, digital tools can “free people up to become consistently their best selves,” Davies argues.
Aviva Investors: Why has behavioural finance risen in prominence in recent years?
Greg Davies: Academic economists did their best to write any form of ‘human-ness’ out of their theories for a long time. And yet the fact that financial decisions are motivated by personality and emotion should not be at all surprising.
During the financial crisis it became very apparent that the role of behaviour had been vastly underestimated in all aspects of applied finance. It’s fair to say behavioural economics was the only economic field that came out of the crisis with better PR than it went in with. There was more of a story to tell. Suddenly behavioural economists had a very tangible example of what happens when you don’t take people’s behaviour into account.
Behavioural economists have drawn attention to the role of cognitive biases in influencing human behaviour. Are there any biases that are particularly important for financial advisers to consider?
I think there has been something of a ‘bias’ bias in the industry. There has been a tendency to tack the word ‘bias’ onto the name of every new psychological anomaly and add it to a list. Think of it this way: most people essentially face a trade-off between the right thing to do and the comfortable thing to do. Biases influence what feels comfortable and intuitive.
Sometimes what feels comfortable is the same as a good decision, but unfortunately that is rarely the case when it comes to financial decision-making. Most non-professional investors leave their money in cash for much longer than they need to because it is psychologically uncomfortable to expose yourself to risk; they are effectively buying emotional comfort at the expense of long-term returns.
There is an interesting analogy in football. In penalty shootouts, goalkeepers leap to the left or right 90 per cent of the time, but the ball goes down the middle 40 per cent of the time. In purely statistical terms, goalkeepers would do better if they did less. So why do they dive? Because it is less discomforting to fail after making a spectacular leap than to fail standing still. In all sorts of ways, people trade payoffs for comfort.
How can behavioural science help advisers quantify their clients’ risk tolerance?
The idea that some people might want to take more risk than others is a fairly standard part of traditional economic theory. Regulators require advisers to make an effort to measure risk tolerance. But how you figure out someone’s risk profile quickly becomes very behavioural. You can’t just ask people how much risk they are willing to take, as their answer will change depending on what they read in the newspaper that morning or what they heard at the dinner party last night. If you are trying to figure out something that is going to be the hook on which you hang someone’s investment portfolio for the next 20 years, you don’t want that something to be unstable.
This is where behavioural science comes in. There is a whole history of psychometric testing that tries to understand how we establish what is deep-seated, underlying and stable about an individual’s preferences and tolerances. Using an empirical, data-driven approach, it is possible to ask questions that validate an investor’s risk tolerance trait against background data from hundreds of other people. There are other ways to measure risk tolerance used in the academic literature, such as asking people to pick between portfolio A and portfolio B. But these approaches are misguided and even dangerous when applied to financial advice – such questions were designed to test ephemeral in-the-moment attitudes, rather than long-term stable risk tolerance. They provide an answer, but not one that is stable or that answers the right question.
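The idea of reading a questionnaire score against background data, rather than at face value, can be sketched in a few lines. This is a minimal illustration only: the Likert-style item scale and the simple percentile-rank scoring are assumptions for the example, not Oxford Risk’s actual methodology.

```python
def risk_tolerance_percentile(responses, population_scores):
    """Place a respondent's questionnaire total against a reference
    sample of other people's totals, returning the percentage of the
    population scoring strictly below them.

    responses         -- this respondent's item answers (e.g. 1-5 Likert items)
    population_scores -- questionnaire totals from a background sample
    """
    score = sum(responses)
    below = sum(1 for s in population_scores if s < score)
    return 100.0 * below / len(population_scores)
```

The point of anchoring to a population is stability: the same raw total always maps to the same relative position, rather than being re-interpreted portfolio by portfolio or day by day.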
Could you expand on why this is a problem?
One issue is that there are many things about people which are not directly related to risk tolerance that are nonetheless extremely important to their decision-making. To give one example: you may have two people with the same level of risk tolerance: i.e. in the long run they are willing to trade off risk and return in exactly the same way. One of them is incredibly laid back, and never checks their portfolio. That person is probably costing themselves money somewhere, because they are not rebalancing or adjusting the portfolio when needed. The other person is extremely anxious, checking their portfolio too often and feeling every bump in the road.
These two people need something different. While they might not need different portfolios, they definitely need a different type of communication from their advisers to ensure they are making the right decisions.
How is technology being used in the advice industry to help improve financial decision-making?
There are three aspects to this: digital, data and design. Think of it as a Venn diagram: you have digital platforms as a mechanism for delivery of information; data that can enable you to personalise what you put in front of clients through the digital channel; and a design that makes the platform comfortable and easy to use. Then you have behavioural science at the centre to pull all of these elements together. With this combination we can build ‘decision prosthetics’: tools that draw in data and lead clients through decisions in a way that is uniquely and personally tailored to them.
Is this about automating decision-making?
This isn’t about removing people from the process – it isn’t ‘robo’. It’s about providing people with tools that make them more consistently their best selves. These tools can lead to fewer errors because you are presenting information to clients in such a way that they can consciously counteract their biases. A computer can crunch the numbers quicker and more objectively than a person can. This is about freeing people up to do what people are good at, such as appreciating ambiguity and nuance.
Could automated processes be used by investors to capitalise on irrational behaviour among their peers?
Something as simple as a momentum factor in investing is effectively a behavioural factor that can be plugged into a quantitative model. There are a number of hedge funds using machine-learning techniques to process much bigger corpuses of data in this way to tease out anomalies.
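The momentum factor mentioned above is often built as a trailing-return signal. The sketch below is illustrative only: the twelve-period lookback and one-period skip (commonly used to sidestep short-term reversal) are conventional assumptions, not any particular fund’s model.

```python
def momentum_signal(prices, lookback=12, skip=1):
    """Trailing return over `lookback` periods, skipping the most
    recent `skip` periods -- a standard cross-sectional momentum input."""
    if len(prices) < lookback + skip + 1:
        raise ValueError("not enough price history")
    past = prices[-(lookback + skip + 1)]
    recent = prices[-(skip + 1)]
    return recent / past - 1.0

def rank_by_momentum(price_histories):
    """Rank assets by momentum signal, strongest first."""
    scores = {name: momentum_signal(p) for name, p in price_histories.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A quant model would then overweight the top-ranked names and underweight the bottom, effectively monetising other investors’ tendency to under-react to news.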
So can we apply artificial intelligence to investing? Yes, probably. My concern is this: one thing machines are good at doing is taking information within a defined and stable set of rules and looking for every small inefficiency humans would not see. What they are not good at is dealing with a changing set of rules. If the structure of the market changed somehow, a machine might start taking massive leveraged bets on what it perceives as anomalies, when the reality is simply that the environment has changed. If you let machines loose in this way it might result in catastrophic losses.
So in your view, a combination of machine tools and human decision making is the way forward for investors?
Systems that combine human and non-human elements can be greater than the sum of their parts. Chess is a good example. After Garry Kasparov was beaten by IBM’s Deep Blue chess computer in 1997, he started playing a new form of ‘centaur chess’ in which humans play alongside machines. To this day, a human player using a simple chess computer can achieve a level that is far higher than either a grandmaster or an AI-driven supercomputer.
If there is a single thing behavioural science can teach individuals about investing, what is it?
Investors are largely ‘passive aggressive’. They are passive because they leave far too much of their wealth in cash doing nothing for far too long. And they are aggressive because the bit that they do put into the market they constantly tinker with. We would all be better off doing the opposite: putting all of our wealth in the market and then doing nothing. Needless to say, this is easier said than done.
Originally published by Aviva on 17/5/2018.