This post originally appeared as part of Forum for the Future's 'Sensemaking' series and forms part of BoraCo's BreakOut work.
“There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?” - David Foster Wallace, This is Water.
Sometimes, the most obvious and important realities are the hardest to see and talk about. Digital technology, now ubiquitous in our daily lives, is perhaps one such reality. These technologies have thoroughly reshaped the way we communicate, how we access information, and how we work. Sometimes in ways we envisioned... and sometimes not. The reality of these unintended consequences bears interrogation.
Practical decisions about how technology functions in our reality are usually driven by straightforward business aims - for instance, to maximise clicks, or to automate tasks to increase efficiency, sometimes eliminating the need for human involvement altogether. But the ripple effects of these decisions can reach far beyond the maximisation of ad revenue or basic business efficiency. Some of these decisions may even be hard-wiring short-term business thinking into technology or service design, such as setting precedents for ‘paid-for’ private police services, or creating AIs designed to protect their owners’ interests over those of others. Given the challenges being levelled at today’s prevalent neo-liberal idealism, and the questions surrounding Silicon Valley’s individualistic culture, baking today’s ideas into tomorrow’s technologies could be problematic for pioneers of social change.
Widespread use of AI is imminent, yet the current digital ecosystem has been likened to the ‘wild west’, and has proven susceptible to gaming for political purposes. To harness revolutionary technologies effectively and fairly, we will need to create coherent ethical governance structures for ‘online’ activities. Right now, that seems a long way away: on one hand, we have brands, government and regulators reacting to today’s problems, and on the other, academics and futurists looking 20 years ahead. Good solutions will require a joining of forces across all sectors. That’s why we’re putting on BreakOut, an event designed to bring people together from different backgrounds to ask not how, but why, we are accelerating this digital reality. If it is to help society develop for the better, then what principles, codes or stewardship are needed to blueprint ethical codes for a digital age?
A FOCUS ON: ACCESS TO INFORMATION
Gone are the days when a single newspaper editor commissioned and curated the headlines. “News” and information are increasingly served to us via social networks and search engines, designed to optimise ‘clicks’ rather than deliver balanced content. Algorithms prioritise content that will engage the receiver over content that will genuinely enlighten or educate them. Publications themselves, whose very survival now relies heavily on ad revenue, sensationalise copy and headlines to the same ends. This hunger for clicks serves us things that are shocking, entertaining, or designed to appeal to our biases. Sadly, the content we see is thus more likely to be misleading, antagonistic, or straight-up political propaganda.
Even when we try to access articles independently, the system is wired against well-intentioned neutrality. Search engines can be gamed, meaning that the information we find, or try to access neutrally, may have been optimised for campaigning purposes rather than relevance, or worse, censored. Progress has been made, but one has to question the opacity of decision-making processes within companies that are essentially our gateways to the internet, particularly with censorship set to be one of the hot topics of tomorrow.
We’ve heard that this shift in our access to information, and misinformation, has intensified echo chambers and fed our inherent biases; that it has fostered the growth of different forms of extremism, and the polarisation of societal values and worldviews. Whether that is actually the case is open to interpretation. But what has ultimately enabled this situation is users’ diminishing ability to clearly decipher whether the content served to them is balanced, factual and diverse. Fact-checkers and Facebook initiatives attempt to place the burden on the individual to check the quality of the content they are consuming. But the platform user is “logged in”, and thus trapped within a system that, based on their personal data, seeks to deliver the content it thinks they “want” to receive.
A FOCUS ON: ADVERTISING
Meanwhile, the digital advertising ecosystem is so complex and opaque that ad-spend from big brands, and even government departments, has inadvertently appeared alongside terrorist content. Worse, its current incarnation supports serious fraud, such as spoofing, which may fund cyber crime and extremism.
There is a great deal of good work being done to improve the situation. We are seeing drives towards more effective and unified standards for verification and measurement, advertising whitelists, and noises surrounding the potential of blockchain to increase accountability.
However, the fact is that while we know ad fraud is happening, we often don’t know where or when it has happened. Ultimately, we don’t know for certain where our ad-spend is going, which represents a big problem for brand safety. It will take a concerted and coordinated effort to achieve a transparent, verifiable and accountable digital ad-ecosystem. But the benefits will be better ROI for advertisers, a better offer from those on the sell side and, hopefully, the elimination of the ‘bad actors’ whose behaviour currently drags everyone operating in this area into the mire.
HOW DO WE RESPOND?
Brands have, to date, escaped ruinous consequences from misplaced advertising. Social networks and search engines have also avoided serious reputational damage, implementing individual initiatives as reactive responses to specific issues. Where possible, people blame ‘the algorithm’, as if automation were outside the sphere of human influence. Action taken has generally been unilateral and reactive, designed to protect brands and counter criticism. There has been some proactive action, but we’re certainly not seeing much joined-up industry collaboration. So far, the brilliant initiatives that give us hope still mostly lie within the grassroots.
The most obvious criticism of a reactive approach from business is that its actions appear to be PR moves, “bandaid-ing” over real issues, discrediting claims around social responsibility, and tarnishing reputations further. However, there is a more important reason this approach is inadequate: the direct impacts on businesses are mere tremors compared to the more seismic changes being wrought on our societies. The confusion of our ‘post-truth’ world is hastening a decline in trust in our institutions (including brands), which has driven political upheavals that deliver great uncertainty, and the very real possibility of economic damage. It is these political changes that are the real threats to businesses and society. By addressing the issues that are eroding trust, such as problems with the digital ad-ecosystem and flaws in our access to information online, we can solve our part of the problem.
Industries using reactive methods are attempting to respond individually to complex and multi-causal systemic issues. Acting alone they cannot possibly succeed. It’s no wonder they’re feeling the heat.
Responding to these systemic risks will require systemic responses. By definition, such responses must be multi-actor and cross-sectoral. Proactive change on that sort of scale can, broadly speaking, be enacted in two ways: through top-down legislation and regulation (likely to be shaped by lobby groups, or developed in reaction to societal events); or by a group of stakeholders working collaboratively. No individual organisation or actor will have all the answers (not even Google).
The imperative to act now to address present-day issues around access to information, and the digital ad-ecosystem, is clear: AI’s rise to prominence in day-to-day life will drastically change the nature of work, wellbeing and, potentially, friendship. It will need to be carefully understood and managed. A proactive, multi-stakeholder approach will be needed to ensure that the introduction of such epochal technology runs smoothly and for the benefit of society at large. Involving and representing diverse groups, organisations and individuals, to understand and address our current digital challenges together, will be the precursor to being able to do the same for the even greater changes ahead.
WHAT WE AIM TO DO
BreakOut will provide the space, inspiration and impetus for attendees to start to become conscious of the wider implications of the day-to-day decisions they are making; to meet others with whom they can collaborate, tap their collective intelligence, and identify key areas to explore.
Essentially, it aims to bring together those thinking 10 years ahead with those thinking 10 days ahead, and to find common ground between them. It is the first step in what we hope will be a longer-term journey, allowing organisations to identify, understand, and respond to systemic issues in the digital world.
With the widespread adoption of AI looming, we must learn to govern the digital sphere now, or we will never be able to manage the development of this incredible technology in an ethical fashion. New code must be informed by new codes of ethics. Our mission now is to define our vision for a positive future… and technology’s role in helping us to achieve it.
Sensemaking by Harriet Kingaby, Jen Katan and Neil Young