I open a window, take a deep breath and — jump.
A thousand notifications hit my screen, coming in thick and fast as the browser loads. Unread messages, news stories, sales notifications, app updates, and a flood of social media interactions numbered in candy-coloured bubbles. When will design catch up with experience, giving us skulls and crossbones instead of bubbles and hearts?
3,242 gut-churning new messages. New York Nuclear Explosion in Pictures! Self-Driving Car Plunges Into Crowd. Government to Implement Cannibalism Measures. President Incarcerated! President Awarded Nobel Prize! Immigrants Given Mansion with Pool. All of these, I know, will link to footage or photographs generated to match. Some try to direct me to videos of celebrities doing compromising things. A few claim to have videos of me doing compromising things — deepfake blackmail, to be deleted without reading.
From this glut of content I fish out the information I need, pay a bill, send a message. It doesn’t take long: the messages write themselves, text auto-flowing faster than I can think.
Deep breath, and – social media time. It’s like stepping into a polluted stream; I don’t know where the currents will take me, but there’s sludge in every direction. I have to keep up a minimum presence, and that means running the gauntlet of hateful messages that seem to pop up on every post. Orange juice, fruit, granola – even a picture of my breakfast will set someone off, but I post it anyway. I’m already being watched, already on the wrong side of the people in charge.
I was 19 that summer, and like any teenager, spent five hours a day on my phone. I wasn’t sexting or sending selfies: my friends and I were the most politically engaged young people since the 20th century’s wars. We were fighting for ideologies, ways of life, futures for families we feared we’d never have. We were trigger-happy trolls or warriors, depending on your perspective, ready day or night, rain or shine, to share, shame or block. Mostly we were angry, scowling permanently through the light of half-smashed screens that our zero-hours contract jobs didn’t pay us enough to insure.
It hurt me to see people hurting each other, putting so much anger into the air. But I also thought a lot of problems could be solved if people valued the truth more than “being right”. There were good reasons to be angry, but a lot of bad ones too. Technology had sped up at just the right time to ensure that nobody’d ever have to change their minds again, with the rise of eFakes making it easy to create evidence for anything.
I spent a lot of time that year examining photos, videos and stories, posting on sites that tracked eFakes as they spread. I read articles across the political spectrum, trying to understand why people believed one thing and not another. I waded into online battles armed with information. I made the case for common ground. After a day of packing delivery bots, it was something to do. And in spite of the insults and death threats, I liked to think it made a difference.
The news story fanning the flames that day showed a group of immigrants beating up an old man in a park. A continuity jump showed it was a fake, and a crude one, though this didn’t stop the flow of xenophobic comments. I was about to post a reply, pointing out the jump in what I hoped was a convincing but compassionate way, when something happened to the screen. The webpage began to move and flex. All the angry words on the screen came tumbling down and fell out of view, like ants being shaken off a picnic blanket. The screen, emptied of content, turned black. A flourish of multicoloured particles swirled and darted, forming the words:
WE ARE THE MYTH UN-MAKERS, WE ARE THE DREAMERS OF DREAMS.
The dreamy animation faded away, replaced by text that went on in a less theatrical style:
We believe in the possibility of a fairer society, created by AI and you.
Would you like to be part of our team?
Underneath that was a field for giving my name, email, and phone number, and something else – a symbol. A double-headed arrow, like the symbol in chemistry that means a reaction has reached equilibrium.
I’d heard of something like this before. A friend of mine had been researching some specialised programming application one day when the internet “split in half,” as he described it, and a pop-up appeared, inviting him to take a challenge that would double as a job application. He’d done the challenge, and, miraculous as it seemed to the rest of us, gotten a job at Google.
So I went down the rabbit hole, filled in my details and crossed my fingers. I hadn’t been looking for a job – like most of my friends, I found it hard to imagine there was one within my reach that wouldn’t soon be automated out of existence. But if there was a way to something better, I’d pay attention to the exit signs.
The reply I received – just a date, place and time – gave nothing away as to where those signs would lead me.
The office was on the 14th floor of a corporate block and had a sweeping view of the city. There wasn’t anything special about it, or about the receptionist who took my name, or the man who shook my hand and introduced himself as Michael – I’ve forgotten his last name, though I’m sure he gave me that too. The symbol I’d seen before, the double arrow, was on the door, but there was no other clue as to the nature of the company. Laptops, scattered on tables, had no one working on them.
“We’ve only rented this office for a short time,” said Michael, waving his hand at the empty desks, “because we’re constantly on the move. But I can assure you we’re legit. You can find our records online. Of course, you don’t yet know what to look for… Welcome to the temporary HQ of the Fair Council Initiative.”
He ushered me into a room, closing the door behind us.
“Our task force works remotely from around the world. We’re a new NGO, involving eleven countries, though our aim is to have every country represented in time.”
The FCI, Michael told me, had been dreamt up by an eccentric billionaire, a tech entrepreneur who’d convinced others in his circle to invest. The organisation was led by a panel of policy-makers, psychologists, communication experts, semioticians – I don’t remember all the details, but that kind of thing. Their aim was to create a global Council of moderators tasked with defusing hostile arguments and encouraging reasonable debate online.
Dreamers of dreams, I thought. I was a bit disappointed. I’d imagined something more subversive. An underground project, involving some exciting, dangerous technology.
“We have the best developers in the world on board, helping to create this…” He showed me a slick interface: an interactive cloud where keywords floated next to viral headlines. “It’s a sort of heat map of the internet, trawling through thousands of conversations per second, across all the major platforms.
“Hate speech is flagged – you see the red dot here? – but it’s not necessarily prioritised. Thanks to the vast number of conversations we’ve analysed, we can pinpoint the moments in a conversation when the right response, dropped in at the right moment, can defuse anger or change someone’s mind. The system shows moderators where their work will have the most impact, which isn’t always where you’d think.”
We gazed at the cloud, which pulsed with red circles. For every one that vanished, two seemed to take its place.
“For humans, the task would be insurmountable. Even with these tools. But AI gives us the chance of making a real impact. All of this data – it’s also a training ground. We’re training a team of AI moderators – Fair Arbiters, we call them – who can work across hundreds of platforms at once.”
“You’re making an army of bots to spam the internet with Keep Calm memes?” I said.
“In a way. ‘Army’ and ‘spam’ are words we prefer to avoid, and ‘bots’ doesn’t do them justice. These agents are capable of learning, and of making sophisticated judgements. That’s where you come in.”
Michael tapped on the screen and scrolled backward through what looked like a timeline. “Historical data. Here’s a conversation you were having last week. We can see which posts were effective as areas became less red, meaning people cooled off or changed their minds, thanks to something you wrote. You weren’t aware of it, because you’re only human, and you can’t hear everyone at once. You can only hear the ones shouting the loudest.”
I digested this with mixed feelings. Gratifying to think I hadn’t been wholly wasting my time. Disconcerting to find out I’d been an unwitting part of their analysis. But these were public conversations, and someone was always watching, and this FCI, whatever it was, seemed to have good intentions.
“So what happens to those people? The shouting ones?” I said.
“They keep shouting.”
“You don’t do anything?”
“We don’t interfere with free speech. We just–”
“Amplify moderate voices.”
He smiled and passed me a piece of plastic, a keycard with a string of numbers at the top.
“So that would be my job? Doing what I do now?”
“In a more official capacity. You’ll commit to being as impartial and fair as you can. You’ll do your best to defuse harmful biases without bringing in any of your own. And you’ll be training a Fair Arbiter – a sort of copy of yourself, who can amplify what you do across the internet.”
He tapped the piece of plastic. “Imagine an AI twin sitting on your shoulder, looking at all the conversations you have. Every time you share, comment, agree, disagree, or change your position, your twin is learning. This is the equivalent of that twin. It’s the key for a sort of spyware that lets us capture the information we need. With your consent, of course.”
“I don’t know,” I said. “Spyware. An AI clone? Not at all creepy.”
“Don’t think of it as spying, but learning. And don’t think of it as a clone. Your twin is both you and not you. Each physical asset – that’s you, and all the humans working with us – has an artificial twin, and each twin is continuously sharing its knowledge with all of the other twins. In a short time we’ll have a system with the pooled knowledge that would take humans centuries to share.”
Here it comes, I thought. The crazy, subversive idea.
But Michael only said, “And if you still feel worried, remember, the aim is just to influence people. No one’s getting sentenced, no laws are being created. This is purely about the power of persuasion.
“You’d be making the internet a kinder place, and getting paid. You can work from home, and you won’t have to pack delivery bots.”
“I’m in,” I said.
The two men sitting on my sofa looked at me sternly.
“So you say they installed this spyware on your laptop?”
“I installed it myself.”
“On the grounds that you were part of some world-changing dream team?”
“Tech companies have said stranger things to recruit people.”
“How many people were working for them?”
“I have no idea. I never met anyone else.”
“And what do you think they’ll do with the information they’ve collected?”
“Why shouldn’t they do what they promised?”
For five months, every Monday through Friday, I’d logged in, checked the tracker, and received a morning briefing. The briefing showed me the day’s news stories flagged as controversial, and the system’s best guess at their truth percentage. I’d check this against what I knew about eFakes – the agents were still learning to detect them – then move on to anger analysis. Each story had a list of keywords revealing how people were responding, and a list of points that the system had determined would be most effective in influencing subsequent discussion.
After the briefing, I’d enter the cloud, scout around the red dots – virtual arenas of war – and jump into action. At the end of each day, I was rated by my AI twin, which showed me where I’d been most effective and where I could have done better, based on data shared by all the others.
Sometimes I was prompted to rate some actions which I assumed were the efforts of the Council itself. These were little glimpses of some larger plan, and though I couldn’t get a sense of its direction, scope or success, I began to understand the “crazy” idea at its heart. We were the Fair Council – this whole system of checks and balances, shared knowledge passed between humans and AI. The Council acknowledged the biases already present in our systems, amplified them and then fed them back to humans to correct. We were closing the circuit between human and machine.
And it worked. As I improved, so did the recommendations in my daily briefing. Where conversations would once have spiralled into name-calling and label-slinging, now they stayed on topic. People were slower to anger, more likely to listen. As the AI arbiters went to work, the calming effect spread quickly, thanks to a crucial insight of the system: most people don’t think of themselves as extremists. Online, they gravitated toward the most extreme expression of their beliefs because the internet made it seem those opinions were more widely held than they really were. We didn’t need to change their worldview completely. We just had to make them think that more people believed something else.
“Were you aware that the information you gave could be used to generate propaganda?”
Near the end of those five months, as extreme content began to disappear from all the major platforms, I found myself wondering how many contributors were now Fair Arbiters, arguing gently with each other. Had human voices been crowded out? There was less outcry than I’d expected when the government overturned some legislation on abortion rights, once one of the most hotly contested topics. How much dissent did democracy need?
“Were you aware that the system you were training to detect fakes could be training another system to generate them?”
I had noticed that eFakes, perhaps in response to my work, had become subtler both in content and in the political persuasion they tried to impart, which made them harder and harder to detect. Increasingly I had to defer to the judgement of the system as to which were true and which weren’t. But the system saw what I could not see. And the system was in the hands of AI invested in making the internet a more humane place.
I opened my laptop that morning to find it had been wiped. Reset back to factory settings, all data lost, presumably by remote means. There was no trace of the work I’d been doing. Then these two men had shown up, with as many questions as I had, plus some additional opinions. The FCI are foreign influencers. Their activity, however idealistically packaged, is suspicious. They enlisted thousands of idealistic kids who let themselves be spied on. The data they’d received could be used to spread the wrong kind of information, and influence people in the wrong sorts of ways.
It’s better to do nothing than risk helping the wrong side.
Orange juice, fruit, granola. My breakfast post has 25 new comments. I dare to look at the first, which tells me that granola causes blindness and tries to sell me prescription lenses. If they’re not trying to turn neighbour against neighbour, they’re trying to sell something. The usual spam filters have been failing against the influx.
After the questioning, the men left me alone with a warning and an injunction to let them know if I had any further insights. That was a year ago, and I have no insights, only theories.
Maybe the FCI, or another organisation like it, was some foreign government’s intelligence operation, as bizarre and high-concept as that seems.
Or maybe something went wrong with the system, the AI agents multiplying beyond the FCI’s control, generating the “grey goo” equivalent of fake news and hate speech, optimised by all our training to create the most polarising content.
Maybe this current government isn’t the type that thrives on peace.
Maybe I was a part of something that could’ve reset this broken system, put it back on track to becoming a place of free speech and shared knowledge.
Or maybe I played a small part in making a world in which no one will ever know what’s real.