Adviser ‘Noise’ in Retirement Income Advice

June 13, 2024
Greg

Globally recognised expert in applied decision science, behavioural finance, and financial wellbeing, as well as a specialist in both the theory and practice of risk profiling. He started the banking world’s first behavioural finance team as Head of Behavioural-Quant Finance at Barclays, which he built and led for a decade from 2006.

The FCA’s Thematic Review into Retirement Income gives the following example of poor practice:

"One firm had a 3-step process for risk profiling, Risk Tolerance, Risk Capacity and Knowledge and Experience. Each stage was based around a discussion with the adviser and there were no standard questions to guide the discussion. There is a risk that this approach could lead to inconsistent outcomes between different advisers of the firm."

This was one finding, in one firm, but it highlights a failing that’s far more common than most advisory firms realise. And it’s not just between different advisers of one firm where we find inconsistency. The same adviser can give inconsistent advice based on factors unrelated to the circumstances of the clients receiving it.

We know that adviser ‘noise’ – different advisers giving different advice to the same investors – is a big problem in advice.

But it is often difficult to know how big.

In 2021, we launched ‘Noise Audits’ for financial advice to find out.

In light of the Review, we’re republishing our introduction to these Noise Audits now.

How much decision noise is there in financial advice?

A lot, as it turns out!

Oxford Risk has launched a landmark study of human noise and inconsistencies in the advice process.

Research conducted for Momentum Investments, in partnership with The Financial Planning Institute in South Africa, is the first of a number of such studies.

Human advice, whilst invaluable, is (it turns out) also extremely inconsistent. This clearly demonstrates the need for behavioural decision-support tools to assist advisers in being more consistently the best versions of themselves... every day, and for every client.

What is noise, and why should we care about it?

The most suitable investment solution should depend on the circumstances of the investor it’s recommended for, not on the adviser recommending it.

Consistency of advice is a crucial concern for any firm. If what is deemed suitable for a client can differ depending not only on which adviser within a firm they speak to, but also on the prevailing mood of a particular adviser, then that firm has a problem. Especially when we remember that advice isn’t a single event, but an ongoing relationship, and that the regulations care not whether you get it right on average, but whether you get it right for each individual.

The two main sources of inconsistency in advisory processes are an overreliance on humans, and the heavily front-loaded nature of suitability assessments.

The aim is not to turn advisers into algorithms. Humans are wonderful at many things. But they are inefficient and unreliable decision makers, especially where many moving parts are involved – as in Risk Capacity. Humans are prone to ‘noisy’ errors – unduly influenced by irrelevant factors such as their current mood, the time since their last meal, and the weather.

Noise isn’t bias. Bias is systematic: it errs the same way every time. Like a mapping model that puts every client into too risky a portfolio. Noise errs in more mysterious – and therefore less easily manageable – ways.
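To make that distinction concrete, here is a minimal illustrative sketch in Python (with entirely made-up numbers, not drawn from any of our tools or data): bias shifts every recommendation the same way, while noise scatters recommendations around the suitable level with no pattern you can simply correct for.

import random
import statistics

random.seed(1)

SUITABLE_RISK_LEVEL = 5    # hypothetical "right" risk level (1-10 scale) for one client
NUM_ADVISERS = 1000        # simulated advisers all assessing that same client

# Bias is systematic: every adviser errs the same way, here one level too risky.
biased_recs = [SUITABLE_RISK_LEVEL + 1 for _ in range(NUM_ADVISERS)]

# Noise is unsystematic: advisers scatter around the right level, nudged by
# irrelevant factors (mood, time of day, the weather).
noisy_recs = [SUITABLE_RISK_LEVEL + random.gauss(0, 1.5) for _ in range(NUM_ADVISERS)]

for label, recs in [("bias", biased_recs), ("noise", noisy_recs)]:
    errors = [r - SUITABLE_RISK_LEVEL for r in recs]
    print(f"{label}: mean error = {statistics.mean(errors):+.2f}, "
          f"spread (std dev) = {statistics.pstdev(errors):.2f}")

# Typical output: bias shows a non-zero mean error but zero spread (easy to spot
# and correct); noise averages out to roughly zero but leaves a wide spread, so
# any individual client can still receive an unsuitable recommendation.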

Upfront assessments are necessary but insufficient, and often overplayed. Suitability reflects circumstances. Circumstances change. Constantly. Sometimes changes are imperceptible. Sometimes pandemics happen. Because this is inherently complex, we are drawn towards the sanctuary of the status quo. Overemphasising initial assessments makes investment solutions over-fitted to the investor’s circumstances at that single point in time, and unresponsive to subsequent changes. They drift away from what is suitable over time. And in times of crisis this drift can become a dash.

Because Risk Tolerance is a single, largely stable, and easily quantified attribute, it is regularly weighed too heavily in determining suitable solutions. Suitability should be more responsive to Risk Capacity – especially during an investment journey.

But Risk Capacity has many moving parts. Studies on multi-attribute decision-making show that even when people think they’re assimilating evidence from all sources, they’re really just filtering down to the few that stand out. And that few isn’t consistent over time, let alone over different decision makers.

Establishing frameworks to drive consistency in diagnosing situations doesn't mean giving every client the same answer. It means those answers need to be within boundaries defined by a clear diagnosis of the problem. There are multiple paths towards remedying any situation, depending on client personality, circumstances, and engagement.

Identifying noise isn’t about eradicating inconsistencies. It’s about eradicating unjustifiable ones and evidencing justifiable ones.

It is never enough simply to tick every box and believe that because you’ve assessed, say, Risk Tolerance, Risk Capacity, Behavioural Capacity, and Knowledge and Experience in some way, you will be giving suitable advice. The way you assess them, and the way the outputs of those assessments are combined, are crucial if the advice is to be accurate, consistent, and documented.

The ‘answer’ isn’t to rely on subjective discussions to cover up the shortcomings of a suitability assessment. Not least because this doesn’t fit well with another of the major themes of the Review – the need for robust profiling processes that (ideally automatically) provide evidence for how each client is being assessed in the same reliable manner.

As we wrote in Introducing Noise Audits for Advisers:

"Humans simply aren’t well suited to making robust and reliable ad-hoc adjustments. Advisers should be able to adjust suitability tool outputs to take into account nuances of clients’ situations, but if the tools are well-designed these adjustments should only ever be limited, infrequent, and well-documented. If they need to be more than this, then what’s really needed is a better tool."

Subjective adviser assessments may consider all the right things, but if they're all measured differently, and weighted differently, then the answers are going to be different, and the outcomes for the client are not going to be good.

Thumbnail image by Andy Makely on Unsplash.
