Key points
- Never gamify at the expense of accuracy. Gimmicky games trivialise risk tolerance; they do not test it.
- Helping clients navigate complexity is better than pretending it can be cost-effectively avoided. The real returns from an understanding of the customer are preferable to an artificial understanding by the customer.
- Humans and tech perform best when they play together. Managing moving financial and emotional parts benefits from blending human and technological qualities.
Playtime is over
'Stupidity well packaged,' wrote Burton Malkiel in A Random Walk Down Wall Street, 'can sound like wisdom.' It is a lesson well-learnt, and well-used, throughout financial services, albeit not always for well-meaning purposes.
Common approaches to risk profiling have quickly gone from nowhere to temptingly well-packaged triviality, but too often have forgotten to stop and pick up a sound scientific methodology along the way.
There are plenty of ways to do risk profiling poorly. One of the latest is potentially the most dangerous, because on the surface, it looks like a great idea.
'Gamification' – increasing user engagement by improving the user experience, specifically by incorporating techniques from games – is rightly a mainstay of behaviour-change protocols. In a complex and, to many, mundane field like financial suitability, not to use some gamification techniques would feel like an unforgivable oversight.
However, such temptation should signal caution. Gamification is great for engagement, but the techniques alone are not enough.
At Oxford Risk, we embrace the techniques of gamification wherever we can: particularly in the design of user interfaces to enhance client engagement and experience. However, we never gamify at the expense of accuracy. The game is to enhance engagement, not sell snake oil. Gimmicky games trivialise risk tolerance; they do not test it.
There is a time for simplifying, and a time for science. For example, some 'tests' favour individually intriguing but scientifically vacuous inputs, such as a user's past investment actions, or even a self-assessment of their risk tolerance. Users like this because it attaches a psychologically meaningful narrative to their past actions, but academics dislike it because it adds nothing while taking away validity, integrity, and relevance – and it can end up 'optimising' for precisely the behaviours we want to guard against.
Form should follow function, not replace it; if you're not measuring what you're supposed to be measuring, the playfulness of your polish doesn't matter.
Pretty vacant
A focus on a stylish front-end at the expense of the sort of scientifically robust substance on which any psychometric assessment must be grounded creates a Potemkin village of a process – great for show, but ultimately not fit for purpose. Form must follow function. Capturing clicks is no use without first capturing valuable, usable, client insights.
Einstein's famous (though possibly misattributed) entreaty to make everything 'as simple as possible, but no simpler' applies here. When technology tackles complexity, it tends to err on one side or the other: either technically optimal solutions with no care for user experience, or solutions simplified so far that anyone can use them, but where no one learns anything useful from doing so.
A sufficiently good solution sits in a sweet spot informed by a deep understanding of both the textbook solution and the behavioural traits and tendencies of its users.
Customer understanding is crucial. But helping clients navigate complexity is better than pretending it can be cost-effectively avoided. The real returns from an understanding of the customer are preferable to an artificial understanding by the customer.
Testing the tests
Being able to trust the outputs of a profiling process means being able to trust both the user's inputs (of which their engagement with and understanding of what they're doing are elements) and the methodologies that underpin the design of the assessment and its subsequent creation of the output. Trust needs to be earned with expertise, not masked with marketing.
Behavioural science has a crucial role to play in each of these steps, in ensuring correct functioning, and displaying it in a form fit for easy consumption. But the science must come first.
The quality of a psychometric test is a question of validity and reliability – that it measures what it claims to measure, and that when inputs are consistent, so are outputs. Testing the tests requires a silent sophistication: complexity beneath the surface that is not necessarily evident on it.
An effective question set is like a team, or an orchestra: more than a mere collection of individual parts, the correlations between them count too. Picking the best team requires trials to see which elements work best together.
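As a loose illustration (not a description of any particular provider's methodology): a first pass at 'testing the tests' for reliability might compute an internal-consistency statistic such as Cronbach's alpha, alongside item-total correlations showing how well each question 'plays' with the rest of the set. The sketch below uses simulated responses and illustrative function names; real validation would also need test-retest data and evidence of validity against external criteria.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Internal-consistency reliability of a question set.
    responses: respondents x items matrix of scored answers."""
    k = responses.shape[1]                          # number of items
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(responses: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items –
    a rough gauge of how well each question works with the rest of the 'team'."""
    totals = responses.sum(axis=1)
    return np.array([
        np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
        for i in range(responses.shape[1])
    ])

# Purely illustrative data: 200 respondents, 10 risk-tolerance items on a 1-5 scale,
# all driven by one underlying trait plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 10))), 1, 5)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
print("Item-total correlations:", np.round(item_total_correlations(items), 2))
```

In this toy setup, a weak item-total correlation flags a question that may be measuring something other than its teammates – the kind of trial that helps pick which elements belong in the final question set.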
Suitability shouldn't stop at the start line
The complexity investors need to navigate is a function of the number of moving parts involved. Because investment markets move around more than an investor's relatively stable willingness to trade off the chance of bad outcomes for good ones (i.e. their risk tolerance), traditionally most attention is paid to risk tolerance and the wider suitability process at the outset.
However, over the course of an investment journey, it is the moving human parts – a panoply of behavioural reactions – that are arguably more worthy of attention. Suitability is dynamic; it suffers when seen as a snapshot.
Many applications of technology are insufficiently creative because the client 'profile' is treated as only an onboarding issue, segregated from the reporting and relationship management that shape investor-investment interactions through changing circumstances.
Humans do not turn into robots when they start to own investments. Siloing risk tolerance into a bucket of onboarding chores leads to suitability and client-satisfaction risks, and to lost opportunities in sales and engagement, because of a rushed and incomplete approach to client attitudes. Just because a transient behaviour shouldn't be baked into an investment solution doesn't mean it should be ignored when deciding how that solution is presented and managed over time. The person who makes a plan is rarely the person the plan is made for, whether that's the alert and inspired future gym-goer of New Year's Eve turning into the tired and emotional duvet-hugger of New Year's Day, or the calm investor sitting with an adviser for an hour turning into the confused one reading the news six weeks later.
Simple, but not simplistic
Too often, to borrow a phrase from historian Will Durant, 'The fertility of simplicity defeats the activity of intelligence.' Engaging investors with the profiling process is vital, but if it's done at the expense of competently measuring what you need to measure, then it's both dumb and potentially dangerous. Regulatory risks rise as the seriousness of testing investor attitudes to risk falls.
Forgetting what you're trying to do and why, in favour of how you're doing it, is a common error when designing shiny new technological toys.
Humans and tech perform best when they play together. Managing moving financial and emotional parts benefits from blending human and technological qualities. Humans are good at some parts of the suitability process. Tech is good at others. They each have distinct, complementary, roles to play. Tech should be leveraged to help humans navigate complexity, not add another layer of it, or become an end in itself. As simple as possible, but no simpler; beware both the simplistic and the overengineered.
Tech offers the opportunity to produce rich and accurate financial personality assessments at scale, which in turn can be built into hyper-personalised approaches to engagement, communication, portfolio construction and reporting. Well-designed digital platforms deliver information to clients that is personalised, easy to use, and shaped by their behaviours. By taking the legwork out of the profiling process, tech can save human energy for appreciating the ambiguity inherent in its interpretation.
But it can do this only when being good is followed by looking good, not replaced by it.
A version of this article was originally published by WealthBriefing on 1/4/2020 as part of their research report, Technology Traps Wealth Managers Must Avoid.