It's getting harder to differentiate reality from science fiction.
Robot janitors roam our grocery aisles. Data breaches are no longer breaking news. Even China’s social credit system, which will see more than a billion citizens rated on real-time data collected from their everyday behaviour, isn’t simply another Black Mirror episode. It’s a nation-wide implementation that sanctions and governs, even if participants have no wish to partake.
The sort of future science fiction projects isn’t usually pretty. Tales like 1984 and Black Mirror episodes forewarn us of the kind of dystopias we unknowingly enable if we don’t harness power and intelligence wisely.
But dystopias in literature are not created to be didactic almanacs. They exist as handy literary vehicles for social criticism and philosophical argument, and as a great lens through which to re-examine society and ourselves.
Truth is, no one can predict what the future will look like. Human ingenuity can’t be foreseen. New tools and innovations could spark further inventions and investigations. Attempts to approximate how these will be shaped by market forces simply can’t capture the multitude and complexity of factors in play.
We can write fiction, publish strategic predictions based on research and thought leadership, and keep up with trends across industries and countries through the masses of voices online, all in an attempt to prepare ourselves. More often than not, it ends in a sort of anxiety over a future whose changes are spinning rapidly out of our control.
But guess what?
We still have control over ourselves.
We can choose what we consume, shape our own thoughts, and make decisions based on our own assessments. We can think critically and make informed decisions that seek to add value. We can remember to listen to and include voices that weren’t heard. We can have conversations that are transparent and productive. We can.
And why does that matter so much?
Let’s look at what’s happening in A.I.
It took just 24 hours and the power of social media to turn Microsoft’s AI chatbot into a racist, foul-mouthed troll. Other products, such as Google Translate and Amazon’s recruiting engine, reflected gender biases that caught the Big Tech giants unprepared.
The irony of having our own past mistakes resurface so gloriously is tragically funny. But rather than shifting the focus onto developing A.I., why aren’t we also focusing on the roots of these issues that lie within us?
Products of machine learning are built on what we feed them — data harvested from human behaviour. Even if we forget about the bigotry, inequality and injustice that dominated our historical narratives, the data isn’t lying. There are many more improvements we can make in our societies and workplaces for greater inclusiveness and diversity.
Remember that these same products are built to evolve as they consume real-time data. Data that comes from how we behave both online and offline. It’s as detailed as the number of times you jaywalk or steal toilet paper; what you talk about with friends over drinks; where you drive or walk, no matter which country you’re in.
And that’s why we have to play a part in shaping our own puppeteers.
Big Tech may (currently) have a stronghold on data and A.I. But without us, the consumer and the consumed, they have nothing to work with.
1984’s Big Brother rooted out dissidents by destroying thought. Unlike the citizens of Oceania, we still have our language.
We still have the words to shape our thoughts.
It may one day be a privilege we lose. But for now, we have autonomy and privacy over our minds. We can choose how we shape these capabilities.
And that’s something worth thinking about.
More for your reading:
Think you’re safe in the “privacy of home”? Your very own toys are playing you.
Here’s the 2018 of 1984.
On TED: We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci.
And a fantastic way to round up the rather pessimistic selection: WIRED on The Need For Militant Optimism.