Matt Morgan

The Two Realities Problem

When was the last time you had to tick a box? It’s almost impossible to avoid: whether we’re downloading an app, doing online training at work, or using a website, we’re constantly being asked to agree to T&Cs, confirm we’ve read a policy, or give consent to cookies. We’ve noticed that when this happens, we’re often creating two realities. There’s the on-paper reality, where 100% of employees have read and understood a critical policy, or every app user has agreed to the T&Cs; and there’s the real-world reality, where the vast majority of us have ticked the box but not absorbed the (often vast amount of) information. This creates a gap between the well-intentioned action of the app provider, webpage owner or company seeking to comply with important regulations, and day-to-day reality. This is a huge issue: we may think we’ve solved a problem like data protection or ensuring employees follow a legal process, but while we have the appearance of a solution (on paper), in reality it’s possible the problem remains.


The Two Realities

Two realities are created when we design solutions based on some easy-to-make but incorrect assumptions about people. These assumptions were brought to light by the field of behavioural economics, most famously by the late Daniel Kahneman. The assumption we often make is that people are rational and logical and, if they’re given all the right information, they will make the ‘right’ or ‘best’ decisions. This is ‘economic’, rational thinking, personified by Kahneman as ‘Econs’: essentially Spock-like robots that follow clear, logical rules all the time. However, people are in fact Humans! They’re sometimes irrational, and their decision-making is sometimes influenced by things completely unrelated to the subject of the decision, like how hungry they are or how a statistic is presented. That’s not to say that humans can’t be rational or make logical decisions (they do in lots of situations), but they also have a cornucopia of biases that can help or hinder depending on a myriad of complex variables that are almost unknowable. In a nutshell, the Two Realities problem occurs when solutions are designed for Econs, not Humans.



We’ve been spotting examples of the Two Realities recently - once you start seeing it, it’s hard to stop! Here are three examples where well intentioned interventions that aim to provide people with important information to make well informed decisions inadvertently creates the two realities problem.


Policy compliance

Read and understood. 100% compliance. Everyone has read the policy, is now enlightened about its contents, and will abide by it. Well, that’s what it says on paper anyway. In almost every large organisation, workers are required to comply with an array of policies to ensure the organisation doesn’t fall foul of regulations and laws. This is clearly a great aspiration - to get everyone fully understanding the way things should be done. Except how many employees have actually read the policy, let alone understood it? They’ve ticked the box but, frankly, it’s the fifth one that month, on a topic that isn’t exactly central to what they do, and it’s twelve pages long, in small font and legal language. So is it 100% compliance? On paper, yes. But in reality, perhaps only a small percentage have given it more than a cursory glance before ticking the box.


It’s an essential aspiration for a company to ensure their people will abide by their policies (eg. conflict of interest, hospitality, cyber security) to make sure they abide by the law and do the right thing. The wrong assumption is that if we just get everyone to read the policy, they’ll behave in accordance with it. ‘Let’s send them the policy with a tick box to confirm they’ve read and understood it’ is a sentence I’ve heard too many times. This could easily get 100% compliance on paper, but would people act in line with the policies in reality? Unlikely if we know anything about actual human behaviour. 


Patient Information leaflets

Patient Information (PI) leaflets are those tightly folded slips of paper that come with any medicine. The sale of medicines in the UK and Europe is highly regulated, and by law every medicine must come with a strictly controlled set of information on this leaflet. In the UK, the regulator states that they are for patients and should contain ‘high quality information to use in their decision making’. Here’s a question for you. Have you ever unfolded one and read it? Okay, more than one? No, me neither. They’re normally tiny pieces of paper covered edge to edge in incredibly small font. If someone were to read one, it would inform them of the medicine’s side effects, using probabilities of each side effect occurring.


PI leaflets are a great example of where the intent of sharing the information is really positive - to give people important information about the pharmaceuticals they’re about to ingest - but the solution is ineffective because of the ‘Econ assumption’ in two significant ways. First, the format means that only a very small percentage of people will actually take the PI leaflet out of the packaging and unfold it, mainly because it’s a very small piece of paper in very small font, signalling a lack of importance to us Humans. Second, the content provides information but doesn’t actually help with decision-making. Many behavioural economics experiments explore the different ways we present probability and statistics, and show that we find it incredibly hard to make inferences based on those probabilities. Think how useful it really is to know there’s a 2% chance your hair will fall out if you take the medicine, when the leaflet can’t tell you the risk of not taking it (e.g. if you don’t take any medicine, you’ve got a 100% chance that your toes will fall off!).


It’s a brilliant intention, to help people make well-informed decisions about what medicines to take, but the solution has been designed with Econs in mind, rather than taking account of actual human behaviour. 


Data consent 

I do this almost without thinking now - land on a webpage and hit ‘accept cookie selection’. Every single website must ask for a user’s consent before a cookie is downloaded to their computer or phone. A cookie is a file downloaded to store information about a user’s preferences or past actions. Some are essential for security, and consent from users is not required. Others do require consent: cookies that give the website owner useful analytics (tracking which pages you visit, how long you spend on them and what you click on), and those that help provide more personalised advertising.


The data regulator in the UK says that website creators need to ‘be confident that your users fully understand that their actions will result in specific cookies being set and have taken a clear and deliberate action to give consent’. Similarly, whenever you download an app onto your phone, you have to tick a box saying you’ve read the terms and conditions. Much of that text covers the data the app will harvest and the reasons behind it. The T&Cs are 8 pages of tiny text in legal language. What do we as Humans (rather than Econs) do? We tick, because we want that app, and preferably now!


In both scenarios, it’s incredibly important in today’s (and tomorrow’s) digital world that individuals take responsibility for and have control over their own data. Regulations like GDPR in Europe exist because they recognise how important this is, and that people need protection when they can give data away to a vast number of organisations with very little knowledge of what those organisations are doing with it. This is even more pressing in the AI age, where this data could be used in ways that were previously unimaginable.


The intent to protect people’s data and give them an opportunity to provide consent is great. But does the current solution really work? Our guess is that the vast majority of people just tick the box or click ‘accept’ to get onto the webpage they need. Which means that, on paper, people have had the information they need to make an informed decision and the organisation is now compliant and safe from legal sanction. But in reality, the ultimate goal of ensuring the user is able to make an informed decision probably hasn’t been achieved.


Designing solutions for Humans

Spotting when we’re in danger of creating the two realities can be the start of finding different ways to make sure we design solutions for humans, not just making the information available, but ensuring that the information really helps humans (rather than econs) make informed decisions. As with many disruptive approaches, it is unlikely to be easy, but spotting the two realities is a great place to start.


We’ve loved Kahneman’s classic Thinking Fast and Slow, along with Daniel Ariely’s Predictably Irrational for a quicker read. Here’s to observing human behaviour and designing solutions that really work and tick the box, rather than just ticking the box. 



 

little BIG idea

A little summary of this big idea using the 1000 most common words

Some things make sense on paper but not in real life, so we need to think about what people are like when we are trying to fix things.
