Engaging with the internet in the age of global anxiety
Mental health is a funny thing, isn’t it? It’s something everyone knows is important yet few seem to really prioritise.
A mental health issue isn’t quite like a physical one. If we break an arm, we rush to a doctor who quickly puts us in a cast and tells us to avoid exercise for a while. Mental health, however, is invisible. It takes self-awareness (or someone close to you) to identify that something might be wrong, and courage to seek help. Then, once you’ve sought help, it takes active work to manage and overcome the problem.
There are endless contributing factors to mental health and one can’t sustain a productive life by sheltering from all of the things that may negatively impact it. That said, we know there to be certain things that put disproportionate pressure on our mental wellbeing, yet we’ve been convinced bit by bit to integrate them deeper into our lives.
I am, of course, talking about digital technologies such as social media, Netflix, Google, Amazon, and so on. The world is quickly becoming an endless stream of distractions fuelled by technology that is far outpacing our ability — or even willingness — to control it.
The documentary The Social Dilemma is a brilliant and candid insight into the power social media holds over society and the implications this carries for the mental wellbeing of its users. More worryingly, it is a scary peek behind the curtain of attention capitalism. Boil it down and the single metric each of these companies really cares about is the portion of your time you dedicate to them. It is this attention (or ‘engagement’) they monetise, be it through advertising, subscriptions, or sales, and it is this attention they manipulate to keep you in the viewing vortex.
Tony Robbins likes to say, “energy flows where attention goes”. This might be true in those moments where we are fully present and have full agency over our actions, though such moments are becoming fewer and farther between. In the world we live in, money flows where attention goes.
Roughly 1 in 4 people in the United Kingdom are diagnosed with some sort of mental health issue each year, with 50% of mental health problems in adult life (excluding dementia) taking root before the age of 15.
With the lion’s share of adult problems originating in childhood, it’s worrying that 75% and 69% of teens in the United States use Instagram and Snapchat respectively, and that nearly all teens aged 13–17 (95%) own a smartphone. That is a completely unregulated channel through which these tech companies can influence more than just the buying patterns of a generation.
Since living in London, I’ve become increasingly aware of food allergies and intolerances: packaged foods are required to carry allergen warnings, and restaurants are required to ask everyone whether they have any allergies. All of this even though only 1–2% of the UK population has a diagnosed food allergy.
This made me think, do we really take mental health issues as seriously at a societal level as physical health issues? Is the potential of suicide treated as seriously as peanut-induced anaphylaxis? Is depression or anxiety given the same platform and treated with the same openness as bloating or a rash?
The food, beverage, cosmetics, and pharmaceutical industries are all regulated, requiring rigorous testing and clear labelling before any products can be distributed. Technology, on the other hand, is the Wild West. Moore’s Law and all it enables far outpace anyone’s ability to foresee and control for negative externalities.
So, what if tech was regulated and had to go through the same rigorous testing and labelling process as other industries?
What if Facebook could only change its feed if it disclosed test results for spreading misinformation or stifling diversity amongst a representative group of its users?
What if Instagram had to test whether infinite scroll or any of its other features drove addiction before releasing them to their entire user base?
What if Twitter had to label each of its posts with a breakdown of the relevancy, bias, or currency of the information?
What if Google had to disclose what criteria it analysed to determine the order in which it displayed results for each individual search?
What if Apple had an obligation to warn people of the potentially harmful effects of each of the apps in their catalogue?
After all, one can’t walk into a shop that sells alcohol without being reminded to drink responsibly.
Customer Journey Maps (CJMs) are tools most companies use to understand the buying and usage patterns of each of their key customer types (personas).
As a business tool, they generally focus on identifying the steps along a journey that might cause a person to decide against becoming a customer, and on solving for those steps in advance. The goal of a CJM is to hack the buying experience so that, as soon as someone becomes aware of a problem, the outcome is the purchase and usage of Product X.
Customer journeys are generally thought of in the following sequence: awareness, decision, purchase, and usage.
I believe the initial two phases of the customer journey (awareness and decision) are the important ones to focus on for the time being, as they’re probably the easiest to solve for. At the very least we’ll be able to arm children signing up for social media accounts with the information they need to make an informed decision, hopefully reducing the horrifying suicide numbers that correlate with social media usage.
The easiest way to solve these phases is to treat App Stores like places we buy alcohol or gamble, and websites as product packaging.
In physical stores we have signs telling people that smoking is harmful, and in gambling dens we have copious messaging reminding people they have families. Similarly, you can’t buy a box of granola without it carrying warnings that it may contain nuts or gluten.
This would ideally create a level of awareness going into the decision phase. Of course, people still have the ability to overlook these warning signs, as smokers continue to do each time they restock, though the likelihood of following through with a purchase falls with regular exposure. Similarly, the purchase funnel itself could carry warnings, though these would potentially be harder to administer.
The usage phase is the really hard one to fix, as the genie is already out of the bottle. Tools such as Apple’s Screen Time are useful in setting usage limits, though the reality is that these are opt-in, rather than opt-out.
An opt-out approach might require companies like Instagram, Facebook, and Netflix to replace infinite scroll or auto-play with alerts showing total metres scrolled or daily accumulated watch-time. These simple nudges may be enough to trigger awareness of a different problem, hopefully starting the customer journey to recovery.
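To make the idea concrete, here is a minimal sketch in JavaScript of what a “metres scrolled” nudge could look like. The function names, the 96 DPI screen assumption, and the 100-metre threshold are all my own illustrative choices, not anything these platforms actually expose:

```javascript
// Hypothetical nudge logic: convert a running total of scrolled pixels
// into metres and decide when to surface an alert.
// Assumes a typical 96 DPI screen: 96 px per inch, 0.0254 m per inch.
const PX_PER_METRE = 96 / 0.0254; // ~3779.5 pixels per metre

// Metres represented by the pixels scrolled so far.
function metresScrolled(totalPixels) {
  return totalPixels / PX_PER_METRE;
}

// Fire a nudge each time another `threshold` metres have accumulated
// since the last alert.
function shouldNudge(totalPixels, lastNudgeAtMetres, threshold = 100) {
  return metresScrolled(totalPixels) - lastNudgeAtMetres >= threshold;
}

// In a real browser extension, a scroll listener would feed totalPixels,
// e.g. via window.addEventListener('scroll', ...); omitted here so the
// sketch stays free of DOM dependencies.
```

A content script would accumulate pixels from scroll events and reset `lastNudgeAtMetres` each time an alert fires, so the user is reminded at regular intervals rather than once.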
Nudge theory suggests that the best way to change behaviour is with arguments that have emotional resonance or are linked to identity and beliefs, rather than with rational facts alone. Perhaps the suggestions I’ve made above won’t be enough to completely change behaviour patterns, though they’re a good place to start.
Though I consider myself an optimist in life, I can’t help but feel some deep concern for the trajectory of society as a result of big tech’s exploitation of our unquenchable thirst for distraction.
There are simple regulations we could introduce requiring warnings in the awareness and decision phases of the customer journey, and potentially interaction-based nudges that would arm people with the information to make better decisions, as well as awareness of their subconscious behaviour patterns.
An idea I’ve started exploring with a few people is a Chrome extension that embeds these subtle nudges within some of the platforms discussed above. We hope it will become a tool for us to start experimenting with awareness.
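For the curious, the scaffolding for such an extension is small. The manifest below is a hypothetical sketch (the extension name, version, file name, and matched sites are placeholders of my own): it simply tells the browser to inject a nudge script into a couple of the platforms discussed above.

```json
{
  "manifest_version": 3,
  "name": "Awareness Nudges",
  "version": "0.1.0",
  "description": "Embeds subtle usage nudges into attention-hungry sites.",
  "content_scripts": [
    {
      "matches": ["*://*.instagram.com/*", "*://*.netflix.com/*"],
      "js": ["nudge.js"]
    }
  ]
}
```

All the interesting behaviour would live in `nudge.js`; the manifest only declares where it runs.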
We are going to spend more time exploring the usage phase of the customer journey, and would love to invite anyone interested in exploring this with us to reach out at firstname.lastname@example.org or find me on Twitter.