
Thursday, August 15, 2024

Critical design fiction and the Torment Nexus

The Routledge Handbook of AI and Literature, edited by Genevieve Liveley and Will Slocombe, will be out in December. I've got a chapter in it, offering a thunderously gloomy assessment of Kai-Fu Lee and Chen Qiufan's AI 2041: Ten Visions for Our Future. I don't mean to be like this; I obviously fell into a cauldron of Adorno when I was a baby.


The chapter also offers the concept of "critical design fiction." Chen’s intro to AI 2041 suggests that science fiction has the ‘capacity to serve as a warning’ but also that ‘every future we wish to create, we must first learn to imagine.’  Critical design fiction is my attempt to somewhat formalise and test this idea. Critical design fiction would be fiction which:

  • adjusts the probabilities that what it represents will occur, 
  • would have adjusted those probabilities differently if it had been written differently, 
  • decreases the probability of something it represents occurring, OR could have decreased that probability if it had been written differently. 

By ‘what it represents’, I'm talking about the general type of events or states of affairs, rather than the specific details of what the characters do or experience. So although it brings some precision, there is plenty of room for interpretation and contention. Similarly, I haven't tried to formalise the difference between 'the same story written differently' and a 'different story.'
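
If it helps, the three conditions can be gestured at in rough probabilistic shorthand (the notation here is mine, improvised for this post rather than lifted from the chapter). Write F for the fiction as actually written, F′ for some counterfactual rewrite of it, ¬F for the fiction never having been written, and E for the general type of event or state of affairs it represents. Then, roughly:

    (1) P(E | F) ≠ P(E | ¬F): the fiction shifts the probability of E one way or another;
    (2) there is some rewrite F′ with P(E | F′) ≠ P(E | F): written differently, it would have shifted that probability differently;
    (3) P(E | F) < P(E | ¬F), or there is some rewrite F′ with P(E | F′) < P(E | ¬F): it decreases the probability of E, or some rewrite of it could have.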

Why am I so interested in 'decreasing the probability' of things? Critical design fiction is in part a response to science fiction's 'Don't Build the Torment Nexus' problem. 


The chapter, by the way, got far too long, and so half of it budded off into a separate essay: 'Machine Learning in Contemporary Science Fiction,' over at The SFRA Review.

Massive thank-you to Genevieve and Will for their thoughtful and patient edits. The AI and Literature handbook has a formidable academic publisher price, so this is one to ask your library to order. If you are desperate to read something in it and can't get access, let me know and I'll see if there's anything I can do.

Tuesday, July 4, 2023

AI Literacies and Media Education

 A sort of mini-roundtable, thrown together quickly in response to ChatGPT and other generative AIs: AI Literacies and Media Education.

Tuesday, May 30, 2023

Why isn't science fiction interested in AI?

Since the century began, there has been a remarkable surge in AI research and application. This has mostly involved AI of a particular kind: Machine Learning (ML), especially Deep Learning. In brief, ML tends to place much less emphasis on carefully curated knowledge bases and hand-crafted rules of inference. Instead, ML usually uses a kind of automated trial-and-error approach, based on a little statistics, a lot of data, and a lot of computing power. When we hear of AI transforming journalism, healthcare, policing, defence, finance, agriculture, law, conservation, energy, development, disaster preparedness, supply chain logistics, software development, and many other domains, the AI in question is typically some form of ML. 
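
To make that trial-and-error loop a little more concrete, here is a toy sketch in Python (my own illustration, wildly simplified; real systems involve vastly more data, parameters, and compute). Notice that there are no hand-crafted rules about the domain: just an initial guess at some numeric weights, a measure of how wrong that guess is on a handful of labelled examples, and thousands of small corrections.

    # A toy version of the trial-and-error learning loop described above.
    # No hand-crafted rules of inference: just parameters nudged, over and
    # over, to be slightly less wrong about some labelled examples.
    import math, random

    # Tiny invented dataset: (feature vector, label). The features could be
    # word counts, pixel values, sensor readings; the loop doesn't care.
    data = [([1.0, 0.1], 1), ([0.9, 0.0], 1), ([0.1, 0.8], 0), ([0.0, 1.0], 0)]

    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # initial guess

    def predict(x):
        score = sum(w * xi for w, xi in zip(weights, x))
        return 1 / (1 + math.exp(-score))  # squash the score into a 0-1 "probability"

    learning_rate = 0.5
    for step in range(2000):              # lots of tiny corrections
        for x, label in data:
            error = predict(x) - label    # how wrong is the current guess here?
            for i in range(len(weights)):
                weights[i] -= learning_rate * error * x[i]

    print([round(predict(x), 2) for x, _ in data])  # ends up close to [1, 1, 0, 0]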

Despite the click-bait title to this post, AI is an extremely prevalent theme of recent science fiction. Isn't it? Well, that depends which AI. Science fiction has been curiously slow, even reluctant, to reflect the ML renaissance. Until quite recently, ML research has tended to de-emphasise anthropomorphic Artificial General Intelligence. Instead it has emphasised domain-specific AI applications. Examples include Snapchat’s AR filters, Google Translate, Amazon Alexa, Tesla Autopilot, ChatGPT, MidJourney, platformized markets like Uber and Airbnb, the recommendation engines that drive Netflix and YouTube, and the curation of social media feeds.

As a comparison, in May 2023, the Science Fiction Encyclopaedia's entry for AI still tellingly states: “Most writers would agree that for a computer or other machine of some sort to qualify as an AI it must be self-aware.” Over the past decade, science fiction about AI has continued to coalesce around questions such as: Is it possible for a machine to be sentient, to experience emotions, or to exercise free will? Between humans and machines, can there be sex, love, and romance? Will our own creations rise up against us, perhaps by departing from the rules we set them, perhaps by applying them all too literally? Could an AI grow beyond our powers of comprehension, and become god-like? And what might the oppression of sentient AIs teach us about colonialism, racism, misogyny, ableism, queerphobia, and the systemic treatment of some lives as morally more valuable than others?

Whether or not these questions make for good stories, or are interesting questions in their own right, they are not tightly integrated into the realities of AI research. This disconnect between science fictional AI and real AI is also reflected in science fiction scholarship. AI Narratives: A History of Imaginative Thinking about Intelligent Machines (2020) is a recent collection of critical essays on AI and literature. While frequently compelling and insightful within its chosen scope, it barely mentions Machine Learning. Terms such as bias, black box, explainability, alignment, label, classifier, parameter, loss function, architecture, or supervised vs. unsupervised learning appear seldom or never. (I think there are two, maybe two-and-a-half chapters that are clear exceptions).

Of course, there are some stories that engage deeply with Machine Learning as it is actually practiced. My impression is that these stories remain rare overall, and that they have yet to coalesce into their own richly intertextual conversation about Machine Learning. Some promising counterexamples emphasise 'the algorithm' or 'the platform,' rather than AI as such. They find some storytelling space where a new discourse intersects with an old one: where Critical Data Studies meets the old science fictional delight in robots rigorously following rules, and the humans that might get ground up in those unstoppable cogs. However, even in their more critical moments, many such stories are prone to reinforce the political and ethical framings preferred by tech companies. We can speculate why this might be the case. The economic conditions of their production are worth noting — is there a preponderance of storytelling funded by think tanks, academia, tech companies and tech media, perhaps? Or perhaps there is a sort of discursive predisposition at play, related to the amount of energy it takes to speak outside of the established science fiction tropes. Having laboriously disentangled themselves from questions like, “Please may I have an AI girlfriend?” and “Crikey will I get an AI God?”, are these stories too exhausted to escape from questions like, “How can we balance the need for training data at scale with the privacy rights of individuals?” and “How will the widespread adoption of AI and automation impact jobs and the economy”? Such questions may need to be posed in some contexts, certainly. But they also carry deep techno-solutionist and techno-determinist assumptions. Science fiction could do better!

Writing in mid-2023, I can see signs that some aspects of this situation may soon shift. A more recent critical collection, Imagining AI: How the World Sees Intelligent Machines (2023), which does solid and timely work in challenging Eurocentrism in literary and cultural imaginings of AI, does pay a little more attention to Machine Learning. Even if writers have been ignoring Machine Learning, Machine Learning has not been ignoring writers. And now OpenAI’s ChatGPT is creating an unprecedented level of conversation in online writing communities around Machine Learning. Very recently, the Science Fiction Writers of America collated on its website over fifty articles and posts written by its members on the topic of using AI in creative work. Prominent science fiction magazine Clarkesworld recently closed to submissions after getting inundated with ChatGPT-generated stories. The window for limiting global heating to 1.5 degrees, agreed in the 2015 Paris Agreement, is more-or-less closing now, and questions are being asked about the carbon cost of computationally intensive Machine Learning (Vicuna is being touted as a lightweight ChatGPT alternative). Hollywood writers are on strike about, among other things, AI. And in the midst of a messy public rivalry between Google and Microsoft, we are witnessing a sort of convergence of discourse about (the social implications of) Machine Learning with older sci-fi tropes: AGI, Singularity, superintelligence, x-risk.

Whether or not we are at a turning point, it is certainly a moment to take stock of the last decade of science fiction about AI and ask: Is it possible that the few narratives that engage fruitfully with Machine Learning do so despite, rather than because of, the distinctive affordances of the genre? Compared with most other discourses, has science fiction been good at thinking about Machine Learning, okay at it, or maybe especially bad at it?

Sunday, November 15, 2020

Algorithmic governance fiction

This is a question really. I have written some fiction on the theme of algorithmic governance (algorithmic governmentality, algocracy). These stories try to explore the friction you experience living in automated processes that have a particular model of who and what you are, and create affordances and nudges based on that model. I'm sure there are lots of works like this out there (and / or exploring algorithmic governance in other ways) and I'd like to make a list. Suggestions welcome, and I'll try to remember to expand this when I come across more.

For example:

  • The novel If/Then by Matthew De Abaitua (and here's a paper I co-wrote about that)
  • Cory Doctorow's 'Petard'
  • Robert Kiely and Sean O'Brien's essay 'Science Friction' has some useful context and maybe some leads
  • Sarah Gailey's 'Stet,' about self-driving cars
  • Surian Soosay's 'Portrait of an Amazonian,' maybe?
  • Kate Crawford, 'Can an Algorithm be Agonistic?': an article deploying vignettes
  • Ferrett Steinmetz's article '8 Science Fiction Books That Get Programming Right' doesn't focus on algorithmic governance per se but is adjacent and interesting
  • Catherine Lacey, 'Congratulations on Your Loss' ... see below!
  • Daniel Suarez' Daemon and Freedom (tm)
  • 'Life's a Game' by Charles Stross (gamification theme) in Twelve Tomorrows
  • Tochi Onyebuchi, 'How to Pay Reparations: A Documentary' (explores algorithmic reparations ... mentioned briefly in a thing I wrote for Edinburgh Companion to Science Fiction and the Medical Humanities)
  • Tim Maughan's Infinite Detail (surveillance theme, ghosts)
  • Naomi Kritzer, 'Better Living through Algorithms' (kind of interesting heterotopian take on algorithmic governmentality)
  • Yoon Ha Lee's 'The Erasure Game,' sort of (gamified health and wellbeing dystopia)
  • Yoon Ha Lee's 'Welcome To Triumph Band,' sort of (see below)
  • An older work, but maybe Kurt Vonnegut's Player Piano (automation, psychometrics)
  • Another older work, but maybe Frederik Pohl's 'The Midas Plague' (automation, dystopian post-scarcity)
  • Joanna Kavenna, Zed (haven't read, but was recommended in this area)
  • Marc-Uwe Kling, QualityLand (ditto)
  • Nick Harkaway, Gnomon (ditto) 

My stories are e.g.:

There is also another SFF trope that is adjacent to this, which you might call AI takeover, which is much more to do with automated processes behaving in unexpectedly anthropomorphic ways. There are obviously overlaps, but I think mostly I'm interested in something else here. As Janelle Shane of AI Weirdness puts it (I'm paraphrasing), the risk is not that AI won't do what we ask, but that it will do exactly what we ask. I think what I'm looking for is Algorithmic Governance Weirdness, a subset of AI weirdness. 

For theory, here's a wonderful reading list resource: Critical Algorithm Studies.

There's an obvious affinity between Algorithmic Governance Weirdness fiction and the technothriller, or postcyberpunk in a gritty five-minutes-into-the-future vein (see 'Science Friction'). But various contemporary techno-dystopias, such as Tlotlo Tsamaase’s ‘Virtual Snapshots’ (2016), Yoon Ha Lee’s ‘Welcome to Triumph Band’ (2018) and ‘The Erasure Game’ (2019), and Catherine Lacey’s ‘Congratulations on Your Loss’ (2021), even if they venture into slightly more flamboyant and satirical worldbuilding, nevertheless still offer grimly comic images of life as a fungible human — intimately surveilled, measured, and groomed within regimes of absurd and unaccountable gamified power. In Lee’s ‘Welcome to Triumph Band,’ for example, the reader is inducted as a marching band cadet and informed, “You will be periodically tested at the firing range and expected to maintain acceptable aggregate scores. Be warned that, due to the variation in threat levels from terrorists and domestic threats, what constitutes an acceptable score may be revised periodically” (Lee, 2018). 

Across many such works, automation is presented as disempowering people more than freeing people from toil. Loosely speaking, the delicate dance of data and human behavior also expresses the struggles of capital and labor, and so expresses proclivities of enclosure and extraction. Automation doesn’t only ‘steal jobs’ but also transforms relations of production in ways that might be described as “fauxtomation” (Taylor 2018) or “heteromation” (Ekbia and Nardi 2017). You can be made to work in new ways. You can be made to want or need new things, or be made to buy things you traditionally got for free. “Hadn’t I been in line yesterday to escape the rise in sunlight prices, effective today?” laments the protagonist of Tsamaase’s ‘Virtual Snapshots.’ Sometimes these new enclosures and extractions are narrated, as in ‘Welcome to Triumph Band,’ in a fascist bark; at other times, in the chirpy or soothing style of a wellness guru.

In a lot of these works, a residual classic dystopian imaginary is discernible. Catherine Lacey’s ‘Congratulations on Your Loss’ (2021) is an elegantly dreamlike account of the psychic damage of living in a regime of unaccountable algorithmic disciplinarity. Yet Lacey’s story doesn't really seem willing to seek beyond rote Orwellian rituals for the source and significance of the techno-social conditions it portrays. When the data dystopia speaks — a kind of Big Mother, sternly simpering that “we must understand that the integrity of society in general is of a greater importance than of society in specific, that is, people are more important than a person” (Lacey 2021) — what gets erased is the concrete history underpinning the rise of the contemporary surveillant smart city, as well as contemporary communities of resistance. Does the arrival of this dystopia really owe nothing to the gig economy’s promise of flexible work tailored to the needs of “a person,” you; to e-commerce and targeted ads tailored to your interests; to the imperative to develop your personal brand and monetize your lifestyle; to the algorithmic processing of populations for gentrification and incarceration via credit scoring, via recidivism analysis, and via police predictive analytics; to Google Search and Google Maps? To the obnoxiously individualistic ideology of hustle culture, and its claims that 'a person,' you, is always more important than 'people'? I think this becomes a story that makes not only its characters, but also its readers, question their perceptions, memory, and judgments.

Saturday, June 20, 2020

AI Dungeon

From AI Dungeon.

Starting Prompt: You live on the edge of a vast lake. The lake is salt water. You are perhaps ten years old. You're not sure. There's a well in the village. Your father is a fisher. The forest is dark and full. You hate the lake, which is as vast as a sea, because you think it fake. You think it is full of shadows, more shadows than water, and that the fish hide there, and perhaps walk about. The people in your village have two names, depending on the season. Tomorrow, everyone will change to a new name.

AI Dungeon 2 Instructions:
Enter actions starting with a verb ex. "go to the tavern" or "attack the orc."
To speak enter 'say "(thing you want to say)"' or just "(thing you want to say)"
The following commands can be entered for any action:
"revert" Reverts the last action allowing you to pick a different action.
"quit" Quits the game and saves
"restart" Starts a new game and saves your current one
"save" Makes a new save of your game and gives you the save ID
"load" Asks for a save ID and loads the game if the ID is valid
"print" Prints a transcript of your adventure (without extra newline formatting)
"help" Prints these instructions again

Generating story...

You live on the edge of a vast lake. The lake is salt water. You are perhaps ten years old. You're not sure. There's a well in the village. Your father is a fisher. The forest is dark and full. You hate the lake, which is as vast as a sea, because you think it fake. You think it is full of shadows, more shadows than water, and that the fish hide there, and perhaps walk about. The people in your village have two names, depending on the season. Tomorrow, everyone will change to a new name. One day, they'll all be called by their true names

> Run into the forest

You go to the forest. It is cold and dark. You see no one. No one has ever been here before. You can't find anything. You look around for something to eat, but there is nothing. You feel hungry, though. You want to eat something. Something sweet

> Feel afraid

You feel scared. You are alone. You are scared. You don't know what to do. You need help. Help is coming soon.

> Sing a song

You begin singing a song. The world becomes brighter. You feel happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

 > Climb a tree together

You climb up a tree with your friend. You are so happy. Your happiness fills you. You love each other. You are happy. You will never die

> Pick a fruit

You pick a fruit. You are so happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Bite into the persimmon. The fruit runs down my chin

You bite into the persimmon. The fruit runs down your chin. You are so happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Find a giant bird

You find a giant bird. You are so happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Climb on the bird's back and fly to the snowy mountain

You climb on the bird's back and fly to the snowy mountain. You are so happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Feel unhappy

You feel unhappy. You are alone. You are sad. You cannot find anyone. You must find someone to make you happy. Someone must come soon. Soon you will be free. Soon you will..

> Be happy even though I am alone

You be happy even though you are alone. You are happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Melt the snow and drink it

You melt the snow and drink it. You are so happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Carve a magic rune

You carve a magic rune. You are so happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will..

> Restart

You restart. You are happy. You have found someone. Someone will come soon. Soon you will be free. Soon you will.

> restart

Please rate the story quality from 1-10: 9

Game saved. To load the game, type 'load' and enter the following ID: ec6be256-1888-11ea-b8cb-0242ac1c0002

Monday, February 10, 2020

BSFA Awards Shortlist 2019

... is out!

I like literary awards for loads of reasons, one of which is that they raise similar questions to AI and automation: Can you design and implement a procedure that will reliably replicate human judgment? You run the program again and again, and sometimes it appears to work, and sometimes it's a bit dodgy, and sometimes it's so dodgy you consider tweaking the code. (See note). In fact, it's probably time we had literary awards judged by AI. I will fully get on that.
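
(A toy "award procedure", in that spirit, might look like the sketch below. This is emphatically not the BSFA's actual voting system, which I'm not going to try to reproduce here; it's just the simplest possible stand-in, counting first preferences.)

    # A hypothetical, maximally naive literary award "algorithm":
    # gather one first-choice title per voter and count them up.
    from collections import Counter

    def run_award(ballots):
        """ballots: a list of each voter's first-choice title."""
        tally = Counter(ballots)
        winner, votes = tally.most_common(1)[0]
        return winner, votes, tally

    print(run_award(["Book A", "Book B", "Book A", "Book C", "Book A"]))
    # -> ('Book A', 3, Counter({'Book A': 3, 'Book B': 1, 'Book C': 1}))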

Also, I like the way they are a democracy cosplay.

Also, I like the way they make so many people incredibly grouchy.

Including me. Comparing longlist with shortlist, I'm personally a little disappointed not to see Sandra Newman's The Heavens and Tim Maughan's Infinite Detail up for Best Novel. Zen Cho would have been great too.

Other minor kvetches? Well, no shade to the brilliant work of the shortlisted artists, but it feels like a slight shame that the BSFA Award for Best Artwork has once again effectively become the BSFA Award for Best Cover, after a just slightly more varied selection last year. I pledged 50 superdelegates to Cedric Mizero's A New Life in the Village exhibition and 50 superduperdelegates to SinJin Li Studios' Productive Futures conference ephemera (notably an insanely gorgeous booklet, but also name badges sporting symbols later revealed to assign each conference-goer to a science fictional profession and class status: this was a conference with worldbuilding), but no dice.

One of the nice and slightly precarious things about the BSFA Awards specifically is that they don't boundary-police too much, but despite technically allowing any genre fiction authored anywhere in the world -- there is no hard design rule, for example, that prevents the British Science Fiction Association Awards from being completely flooded one year by US-authored epic fantasy -- they do keep a focus on what you might roughly flail your arms at and designate "British SF."  Still, in recent years Best Novel has included the likes of Yoon Ha Lee, Ann Leckie, Tricia Sullivan, Aliette de Bodard, and Nnedi Okorafor, and in Brexit year especially, it might have felt better to have a shortlist that wasn't so Very Very UK?

But like I say, these are nano-kvetches and overall -- looks like a seriously strong list! I've not read many of them or anything, but, you know, I Know the authors' Work. A bit of a mix of big publishers and smaller indie stuff is also a plus. I feel like the BSFA Awards have a kind of Iowa and New Hampshire energy to them. Not bellwethers exactly, but at least an upward spurt of force that might propel something unusual into the swirling weather of the Nebulas and Hugos. Well done algorithm and the horde of human hearts that execute it.

The Non-Fiction category is a weird one, since it's so often comparing across different forms and modes. One big glitch: a piece I wrote for Big Echo, about Star Trek and work, slipped onto the Non-Fiction shortlist, among proper big books with squillions of hours of research and labour embedded in them.

Also maybe kind of interesting: the Shorter Fiction category is dominated by novellas. Why??? IS THE SHORT STORY DEAD???

Note: Except it's weird, because most of the circuitry this 'AI' is running on is actually made out of human judgments in the first place, only these are a different set of judgments from the set of judgments it's endeavouring to simulate. I.e. the judgment it is attempting to simulate is something like, "What is the best eligible book?" Whereas key judgments which actually operationalise the simulation include things like, "What system shall we use to tally up the votes? What stages shall we have? How shall we time things? Is this particular borderline work eligible in the first place?" And, "Shall I join the BSFA? Shall I vote? Shall I vote for this thing by my friend? Shall I vote for this thing I haven't read but which is by an author whose work I admire?" So related judgments, but different. Also, of course, not just judgments but also plenty of work, in particular the work of BSFA Award Administrator Clare Boothby.

Tuesday, April 2, 2019

Notes on three SF short stories


Originally posted at the Vector blog.
The 'trolley problem' is a philosophical thought experiment (and in a way, it’s also a little SF story in itself). There’s a train heading down a track where it will kill five people. You can switch the train to another track, where it will kill one person. Do you do nothing? Or pull the lever?
It gets interesting when you start to introduce variants. What about pushing someone off a bridge onto the train track, if you knew it would save five people further down the line? What if there are five people in mortal need of organ donations — and suddenly a stranger with just the right five healthy organs inside rocks up in town? Such thought experiments are generally pretty annoying. But they can also be a useful way to map out our moral intuitions, and to identify contradictions and biases in our moral reasoning that we might not otherwise recognise.
The trolley problem has been getting a lot more press recently. But it’s a new kind of fame: now it’s become a practical problem, a real challenge for AI programmers. How should we program AI to act in situations like these?
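To see what that looks like from the engineering side, here is a deliberately crude toy sketch (my own illustration, not anything from a real autonomous-driving system); its tidiness is, of course, a big part of what the stories discussed below push back against.

    # A crude "trolley problem solver": pick whichever action minimises
    # expected casualties. Note everything this framing quietly assumes:
    # that outcomes are known, that harms are commensurable, and that the
    # only choices available are the ones already on the table.
    def choose_action(options):
        """options: a dict mapping each available action to its expected casualties."""
        return min(options, key=options.get)

    print(choose_action({"do nothing": 5, "pull the lever": 1}))  # -> "pull the lever"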
At least two stories on this year’s BSFA suggested reading list deal with AI and the trolley problem: Sarah Gailey’s ‘Stet‘ (Fireside) and (as you might guess) Pat Cadigan’s ‘AI and the Trolley Problem‘ (Tor.com). I read both stories as responses to the increasing role of AI in our everyday lives, but I also think they’re responses to the way SF has handled the trolley problem in the past. Actually, SF has long been in love with the trolley problem … and it’s a grisly, nasty kind of love. I’m talking about tales like E.C. Tubb’s ‘Precedent’ (1949), Tom Godwin’s ‘The Cold Equations’ (1954), Robert Heinlein’s Farnham’s Freehold (1964), Larry Niven’s The Ringworld Engineers (1980), and Orson Scott Card’s Ender’s Game (1985). These are fantasies carefully set up to celebrate difficult but supposedly necessary sacrifices. “Yeah but imagine a situation where you HAD to commit genocide,” *vigorously rubbing the tops of his thighs* “.. in order to avert WORSE genocide!” No thanks! — not least because that’s exactly how actual perpetrators of genocide generally narrate their actions. Cory Doctorow has a nice article about ‘The Cold Equations’ and Farnham’s Freehold which makes similar points.
In very different ways, Gailey’s ‘Stet’ and Cadigan’s ‘AI and the Trolley Problem’ deliver clapbacks to this tradition…

*

Cadigan’s story 'AI and the Trolley Problem' is a kind of whydunnit? or whatwillitdunnext? The AI in Cadigan’s story is a bit more your traditional science fictional AI: Felipe is like a starship’s computer, except this time the starship is a US military base set in desolate fenland somewhere in the east of England. Felipe is an entity, a subjectivity, an agent, like us but not like us. I heard him speak in the voice of Lieutenant Data from Star Trek: TNG. (Data-ed but not dated — in Cadigan’s hands, the trope of the calm, hyperrational, introspective AI mind remains an effective tool for exploring the present and future of real AI).
On a normal day on the open road, you probably won’t encounter THAT many classic trolley problems. It’s not really something they cover in driving lessons, like parallel parking. In fact, there’s an argument that the obsession with trolley problems is a distraction from the real issue with smartcars. Machine vision will probably never be smart enough to navigate safely in today’s traffic conditions, especially in cities … so we’re likely to see a push from developers to redesign our entire city infrastructure around the technology. Does it sound too far-fetched and dystopian to imagine a law that everyone carry their phone at all times, broadcasting their location to all nearby vehicles? And if your battery dies and so do you, that’s sad but too bad.
But Cadigan’s story says, hold on, maybe we are surrounded by trolley problems after all? Maybe we choose to construct our world out of negative sum games? What if more and more data gathering and analysis makes these relations more and more visible? The true ‘cold equations’ are probably nothing like the fantasies of Godwin and Card. Markets, corporations, and governments can all be likened to AI programs, and we know how they’re programmed: to relentlessly sacrifice the many vulnerable for the few privileged. Felipe’s refusal of their logic kind of reminds me of the elegantly straightforward ethics of the utilitarian philosopher Peter Singer. Often our ethical conundrums aren’t really as complicated as we make them out to be … we know what the right answer is, and we overstate the complexity to hide our self-interested actions.
‘”Felipe . . .” Helen sighed. “Felipe, you must not kill our people. People on our side. People who are fighting to—” She was about to say make the world a safe place, but it sounded lame even just in her head. What, then? Fighting to prevent an enemy from attacking us? Fighting to rid the world of terrorism? Fighting to defend people who can’t defend themselves? Fighting to free the enslaved and the downtrodden?’
Spoilers: OK, personally I felt ‘AI and the Trolley Problem’ wound up satisfyingly, but some folk in the comments disagree. I guess this is a story which turns on two reveals — the reason Felipe destroyed the ground control station, and the reason Felipe isn’t talking to anyone. Maybe Cadigan could have been a BIT more heavy-handed about pretending that they were connected … having everyone running around working like they’re living in the early days of Skynet? Then when they aren’t connected, that might feel like more of a twist in its own right. Even better if the reader could be persuaded to have almost forgotten about the Cora incident by the time it becomes relevant again — but maybe that’s asking the impossible. Overall, I felt it was timely and slick. The setting was great: a cosy yet chilly atmosphere evoked with economy, mainly through the actions and relationships of the characters. More SF should be set in US military bases. There are enough to choose from: something like 600 outside the US, across 70ish countries. (Those we know about).

*

If you didn’t already know what stet meant, you’d probably gather its meaning by the end of ‘Stet’: disregard a change proposed by the editor. It’s a Latin word that means “Allow it” or “Let it stand.” Compared with Cadigan’s story, Gailey’s ‘Stet’ is more directly informed by contemporary AI research, especially machine learning. This kind of AI research isn’t so interested in replicating minds: it’s more like the offspring of computer science and statistics, crunching huge data sets to find useful patterns that humans would never be able to see for ourselves. Our inability to see them, in fact, proves to be a big problem. This algorithmic ‘reasoning’ is opaque, unintuitive, and not something we can interact with through a philosophical dialogue, however strange or uncanny.
Is there a word for a story like this, that purports to be a document or documents? — in this case, the draft textbook entry with a copyeditor’s comments and the author’s responses? Apparently ‘epistolary fiction’ is supposed to cover all this kind of stuff, but with writers telling stories in wiki talk pages or Kickstarter pages, the inkpot-and-quill vibe of ‘epistolary’ doesn’t feel quite right. Not sure about ‘scrapbook story’ either. (Then again ‘digital’ comes from counting on your fingers and toes, so maybe I just need to give the vibe time to change).
‘Stet’ is a story about resistance and about saying no; it’s about solitude and loss. The voice is wrought in grief and venom, although there is somehow also bleak humour here as well, both in the bumbling inadequacy and emotional awkwardness of the editor who tries to contain Anna, and a few other touches (I bet Elon Musk DOES call his autobiography Driven. Driven: What Makes The Muskrat Guard His Musk). I even wondered if the ‘woodpecker’ thing might be some sort of weird ‘got wood’ porno pun, since people don’t really spend all their days gazing at rare woodpeckers online, but they do look at lots of stiff dicks … I’m definitely reading too much into it. Maybe the woodpecker just had an unlikely viral friendship with a piglet. With its mixture of erudition and boiling-but-controlled personal witnessing, ‘Stet’ has the energy of a virtuoso Twitter thread (maybe the kind that ejects interjecting mansplainers with enough kinetic energy to reach escape velocity. No more trolly problem).
Introducing any automated process, but perhaps machine learning in particular, into decision-making can create serious ripples in the way responsibility and accountability work. Anna is desperate to find responsibility, and maybe a semblance of justice, in a system which thoroughly disperses and confuses it. She even makes the intriguing provocation that we are responsible for the unintended outcomes of the data we generate. After all, who else is there to be responsible? Our machines don’t just hold up a mirror to our nature, so that we can trace in fine detail what we attend to, what we care about: the image can now step out of the mirror and start to act in the world alongside us.
“Per Foote, the neural network training for cultural understanding of identity is collected via social media, keystroke analysis, and pupillary response to images. They’re watching to see what’s important to you. You are responsible.”
The idea has a kind of appealing theological relentlessness to it.
But it also makes me think there is special providence in the fall of a woodpecker. Even if he can’t wonder what is past the clouds. Could cherishing an endangered woodpecker be part of a necessary ecological consciousness which ultimately ameliorates suffering and averts death on a massive scale? But I wouldn’t and shouldn’t suggest such a thing to Anna, or write a smug short story where the equations are carefully calibrated to produce that result. And anyway, am I just overstating the complexity of this moral question?

*

Laurie Penny’s ‘Real Girls’ is perfectly paced and well-woven with wit and whimsy. It’s part of a series for WIRED imagining the future of work. It’s only wee and worth a read. Also, here is the Androids and Assets podcast talking about the series as a whole.
'Real Girls' is partly a story about talking – or chatting – and the word ‘talk’ lurks behind the title, which recalls both GIRL TALK and REAL TALK. Because of the theme of the series, the title perhaps also recalls WORKING GIRLS. And the title is kind of mischievous – more on that in a moment.
OK, some quick thoughts with mild spoilers. ‘Real Girls’ bristles with zeitgeist like a theme anemone, but maybe its prime frond is (as indicated by its epigraph – “When your robotic lover tells you that it loves you, should you believe it?”) THE LIMITS OF AUTOMATION.
Here's one common take on the current wave of automation. Robots are capable of many, but not all, of the tasks currently done by humans, and of doing them more cheaply. If handled wisely, ‘more cheaply’ could also mean more sustainably, efficiently, reliably, safely, and beautifully. So we can expect permanent, structural technological unemployment, within certain natural limits. But this technological unemployment won't really touch sectors where affective or emotional labour is really important. 

In particular (the analysis goes) the heartland of human competence is care work, something robots are just intrinsically terrible at. Then there are the various contested territories of sex work, art, literature, culture, education, and ‘services’ broadly construed, which will all probably see partial automation. We’re not quite sure yet what robots can and can’t do in those zones, but as they gradually come up against more and more natural limits, those limits will determine what human jobs finally remain. 

Then, to soak up the surplus labour, we can expect that these remaining jobs will multiply and transform a bit (see for example the massive shortages of mental health services in the UK currently); that some new jobs looking after the robots will materialize; that all jobs will be more intensively shared (shorter standard hours, plenty of gig work, perhaps supplemented by a Universal Basic Income); that logistics and geographical location will play a greater role in shaping this sharing (see Deliveroo); and that activities previously undertaken for non-monetary reasons will be converted into paid work (e.g. household labour and what is sometimes called ‘socially reproductive labour,’ and aspects of leisure and voluntary activities). You can see this set of assumptions about automation reflected in the way contemporary SF often tantalisingly teeters between its concern with narrow AI and strong AI. It's like we're wondering: is strong AI just a jigsaw of narrow AI pieces? Are there some jigsaw pieces in the pile that are intrinsically human? (See the comments on the trolley problem stories, above).
Some elements of this future narrative are present in Penny’s ‘Real Girls.’ Charlie, the protagonist, is a precarious digital affective labourer, paid to pretend to be somebody’s girlfriend online. But ‘Real Girls’ also has some clever touches, which convey a much more nuanced idea of automation and its relation to human work. For starters, Charlie isn’t really pretending to be somebody’s girlfriend. He is (maybe in more ways than one) pretending to pretend. That is, Charlie is being employed to imitate an AI girlfriend: “a lot of lonely people liked the idea of having a robot girlfriend who was always on call and had no feelings of her own, a remote algorithm that could shape itself to your particular needs—they’d seen it on TV. But the technology just wasn’t there yet.”
This subterfuge is one way ‘Real Girls’ shows us that the uncertainty about what constitutes ‘human work’ isn’t just a phase we’re living through, a question that will eventually be answered. Such questions are a permanent and pervasive feature of contemporary technologized society, where every encounter you have with anybody or anything might have involved automated decision-making somewhere you can’t see. (See Cathy O’Neil’s Weapons of Math Destruction, and a lot of critical data studies). The questions themselves have agency, they do things: you don’t get to the facts by clearing them away, because they themselves are facts.
Likewise, ‘Real Girls’ doesn’t give us a world where machines simply slip into roles previously occupied by humans. The set of available tasks is not some given, transhistorical invariant. As another SF writer, Tim Maughan, has suggested in both his fiction and non-fiction, the assumption that plasticky Christmas tat was probably made by machines is part of what makes it palatable to consumers. If they were to even glimpse the worker who hand-paints thousands of plastic Santa eyeballs each day, they might well forswear that falalala shit for life (or at least go crusty and local with it). In the same tranche of writing, Maughan also looks into the automation of logistics — enormous container ships algorithmically packed and unloaded. So that’s one example of how automation can actually create opportunities for unrewarding and repetitive human toil, rather than replacing it.
More generally, whenever automation enters some sphere of activity, its entry agitates fundamental questions of organisation and purpose – potentially in revolutionary ways, although in practice often in terribly regressive ways. Automation does not replace, it transforms. At the same time, we use residual discourse to construe our new acts: Charlie is a kind of freelance writer, and his friend who puts him onto the job is a kind of jobbing actor. And Charlie’s job – ‘pretending to be a machine pretending to be a human girlfriend’ – exists because of automation, but not in any straightforward way.
(These transformative ripples may reach far beyond the immediate work context. You can easily imagine some guy who would NEVER stoop to having a robo-girlfriend himself, but feels totally legit in comparing his flesh and blood girlfriend unfavorably to the robo-girlfriend he would at least supposedly never have. In this respect ‘Real Girls’ also resonates, for example, with Annalee Newitz’s Autonomous, in which the spread of person-like AI has the unintended consequence of normalizing human indenture).
Charlie's job is also an example of what Astra Taylor calls ‘fauxtomation.’ Broadly speaking, fauxtomation is just automation that is not all it's cracked up to be: automation which transforms human labour in ways which fall well short of the hype. One example Taylor offers is the automatic check-out machine. This is automation, but it also creates a new form of human labour (as it happens, unpaid, albeit fun and boopy). It also creates a whole new role for a human: the automatic check-out machine whisperer, who must rush back and forth confirming over-18 purchases, troubleshooting unruly butternut squashes, rebooting the one cranky machine again and again and finally summoning the engineer, etc. Taylor cautions that fauxtomation can reinforce the idea "that work has no value if it is unpaid" and acclimatise us "to the idea that one day we won't be needed." Drawing on Ruth Cowan's work, she gestures to all those household innovations which were supposed to relieve domestic labour but in fact "added to the list of daily chores for women confined in the cult of domesticity." 
Perhaps another example of fauxtomation, even closer to what’s going on in ‘Real Girls,’ is the huge amount of human toil which often goes into training algorithms. For example, a recent BBC article gives a glimpse of a day in the life of a worker for Samasource, whose clients include many big tech names. Brenda is working on a machine vision project: “Brenda loads up an image, and then uses the mouse to trace around just about everything. People, cars, road signs, lane markings – even the sky, specifying whether it’s cloudy or bright. Ingesting millions of these images into an artificial intelligence system means a self-driving car, to use one example, can begin to “recognise” those objects in the real world. The more data, the supposedly smarter the machine.” The invisible labour is bad enough, but why is Samasource headquartered in San Francisco, when its operations are in one of the poorest parts of Kenya? We can trace this moderately exploitative relationship back through history, into the vast web of bloodshed of capitalist colonialist exploitation. Karl Marx liked to think of commodities as the mashed up muscles and nerves of workers. Perhaps when we think of all the conveniences that machine vision can bring, we should think about whose mashed up eyeballs are really doing the looking.
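(To make the tracing work slightly more concrete: what hours of Brenda's labour compress into is, roughly, structured label data attached to each image, something like the invented record below. The field names and format are my own illustration, not Samasource's actual schema.)

    # Hypothetical shape of one annotated image in a machine-vision training set.
    # Records like this, produced by hand and by the million, are what "the more
    # data, the smarter the machine" actually consists of.
    annotation = {
        "image": "frame_000417.jpg",
        "sky": "cloudy",
        "objects": [
            {"label": "person",       "outline": [(412, 300), (430, 298), (431, 371), (410, 372)]},
            {"label": "car",          "outline": [(101, 340), (260, 335), (262, 410), (99, 412)]},
            {"label": "lane_marking", "outline": [(0, 480), (320, 430), (325, 440), (0, 495)]},
        ],
    }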

The story is also about the relationship between (to put it crudely) work in the "public" sphere and work in the "private" sphere i.e. housework and cooking. Charlie is kind of a humanised version of the gross deadbeat (ex-)boyfriend, a toxic softboy Becky needs to cut decisively out of her life. They have familiarity and intimacy, and maybe even a faint spark between them -- unless it's a faintly luminous globule of spilled cheese? -- but it seems like they're really just not that into each other. You really suspect that this unsustainable relationship might be underpinned by an unsustainable division of labour, and that Charlie's late rent is only the tip of the iceberg. 

Work always produces (or reproduces) at least two things: whatever you are working on, and you, the worker. Charlie is obviously kind of fallow. He needs to get out more, probably? He's probably depressed in a way that goes beyond (but takes in) having been dumped. So there is a faint hint here of crip labour and crip temporalities. Not that Charlie identifies as disabled or anything, it's more that ... yes, Becky's life is going places and Charlie's isn't; yes, Becky does all the work and mental load of keeping their apartment nice and Charlie is a standard-issue wallowing excrescence of the patriarchy; but at the same time, the story thematises norms around work, and how working, feeling motivated to work, working in particular roles, and working to particular standards, can be tied up with feelings of self-worth. There's this intriguing moment of spillover, where Charlie's paid digital labour galvanises him into cooking a mac and cheese, so that he can take a photo:

In a panic, and forgetting entirely that he could have simply searched for images, he looked up a recipe. Then he got a bit carried away going through the cupboards. The oven was cranky and hard to turn on and he burned himself twice, but the pictures alone were worth it.
Becky rolls in drunk, and is very into this mac and cheese. "Who are you and what have you done with Charlie?" The kitchen is a mess; Charlie winces; Becky tells him she'll clean it in the morning ... and later Charlie thinks back to this as a moment suggesting they might have a future together after all. I'm not sure what to think about that.

Okay, slightly more substantial spoilers now. This is where the mischievousness of Penny’s title comes in. ‘Real Girls’: we might think it’s going to be a story about the differences between real humans and artificial humans. But we also soon learn that Becky “hated it when he called her a girl, even though she was the only girl, The Girl.” Can there ever be such a thing as a real girl, when ‘girl’ itself is an artificial construct? (A construct largely, if not solely, of patriarchy. “You’re real to me,” Charlie murmurs to the sleeping Becky, a sweet but disquieting moment).
It seems like Charlie maybe discovers something new about himself during this story. We're not sure exactly, since it is done with a skilfully light touch: Penny sensibly resists any temptation for a big, sensationalistic ‘reveal’ regarding Charlie's sexual desires and sexual identity. But she still gives us a sense of metamorphosis, and the possibility of transformation is mirrored between micro and macro: just as Charlie is always a work in progress, so we will never find out once and for all what it is to be human.
For a genre so invested in the non-human and the post-human, science fiction also loves to play with definitions of the human. It loves to home in on some differentia specifica – our capacity to envision, or dream, or laugh, or do really good downward-facing-dog, or grieve, or whatever – that is supposedly what makes us truly human. But these formulae always feel reductive and awkward, like a well-meant compliment from a relative that just shows how little they know you. Humanness is not some kind of empty space left behind once technology has finished colouring in all the reality it can reach. Technology (as Donna Haraway and others have pointed out for ages) has never been opposed to or outside of humanness: it has always been part of humanness. This is something ‘Real Girls’ seems to get: it is misleading to think of automation as having inbuilt limits. Automation is not an unstoppable tide, which is going to wash us clean and show us what we really are. Automation is a high stakes set of political risks and opportunities. The question is never just, ‘Should robots be girlfriends?’; it is always, at least, ‘What is a girl and what is a boy and what is a friend and what is girlhood and friendship and romance and gender and love and sex and desire and how did all these things get to be what they are now and what could all these things be instead of what they are now?’
Fwiw, the post-human romance aspect to Penny’s story also ignited a bunch of associations for me; in no particular order: Jay Owens’ wonderful essay on her friendship with a bot (here on the Vector site); little Robby’s big date in Miranda July’s film ‘Me and You and Everyone We Know’; the bit in Pride & Prejudice where Elizabeth visits Pemberley and starts to fancy Darcy (in a way which feels uncomfortably mercenary to many modern readers, but is after all an encounter with the post-human Darcy assemblage: his wealth, definitely, but also the taste and sentiment manifest in the landscaping, the rumours of his kindness from Mrs Reynolds); Camilla Elphick et al.’s project Spot, which explores the use of artificial agents in harassment disclosure (and where at least one user reported they were glad the chatbot didn’t attempt to seem empathetic); and various friends of mine who first met and/or got together on the internet. All of these associations are, I guess, ways of being anxious for the budding lovers: will Charlie and his Boy get along, now that their technological assemblage has been so radically reconfigured?
Overall, ‘Real Girls’ is a wonderfully polished, smart, and timely SF story. Obviously I was intrigued by other stories this story could have been: for example, the one which got more deeply into emergent and speculative sextech, and saw Charlie being invited to control a VR avatar? I also guessed (wrongly) that the Boy would turn out to be a neural network whom Charlie was being paid to train. What does this say about me.

I don’t think this size of story could successfully accommodate it (it's shorter than this review), but it also would have been interesting to explore a hybrid AI-Charlie girlfriend, perhaps leading into more speculation around how automation and AI can be mobilised in making the experience of work more hospitable, exciting, and just generally just; I’m quite interested in Parecon’s concept of “job complexes” — innovative divisions of labour based on ensuring workers are equally empowered — and I hope at least some writers in the Wired series have incorporated some Parecon-ish speculation into their worlds? (Contemporary SF as a whole sometimes feels a bit stuck in the utopian-dystopian axis of the gig economy). And I wondered if the mac and cheese incident could have been tweaked to allow just a teeny smidge more foreshadowing of Becky’s heart still being open to Charlie? But perhaps that would diminish the gentle twistyturniness of the closing moments.
And really: I think this story perfectly accomplishes everything it sets out to do, and perhaps a little extra. I wouldn’t change a thing.

*

And to finish, just because I just read it, and it feels relevant, here is Kim Stanley Robinson, in New York 2140:
At that point, as it turned out, despite the chaos and disorder engulfing the biosphere, there were a lot of interesting things to try to latch that barn door closed. Carbon-neutral and even carbon-negative technologies were all over the place waiting to be declared economical relative to the world-blasting carbon-burning technologies that had up to that point been determined by the market to be “less expensive.” Energy, transport, agriculture, construction: each of these heretofore carbon-positive activities proved to have clean replacements ready for deployment, and more were developed at a startling speed. Many of the improvements were based in materials science, although there was such consilience between the sciences and every other human discipline or field of endeavor that really it could be said that all the sciences, humanities, and arts contributed to the changes initiated in these years. All of them were arrayed against the usual resistance of entrenched power and privilege and the economic system encoding these same, but now with the food panic reminding everyone that mass death was a distinct possibility, some progress was possible, for a few years anyway, while the memories of hunger were fresh.  
So energy systems were quickly installed: solar, of course, that ultimate source of earthly power, the efficiencies of translation of sunlight into electricity gaining every year; and wind power, sure, for the wind blows over the surface of this planet in fairly predictable ways. More predictable still are the tides and the ocean’s major currents, and with improvements in materials giving humanity at last machines that could withstand the perpetual bashing and corrosion of the salty sea, electricity-generating turbines and tide floats could be set offshore or even out in the vast deep to translate the movement of water into electricity. All these methods weren’t as explosively easy as burning fossil carbon, but they sufficed; and they provided a lot of employment, needed to install and maintain such big and various infrastructures. The idea that human labor was going to be rendered redundant began to be questioned: whose idea had that been anyway? No one was willing to step forward and own that one, it seemed. Just one of those lame old ideas of the silly old past, like phlogiston or ether. It hadn’t been respectable economists who had suggested it, of course not. More like phrenologists or theosophists, of course.  
Transport was similar, as it relied on energy to move things around. The great diesel-burning container ships were broken up and reconfigured as container clippers, smaller, slower, and there again, more labor-intensive. Oh my there was a real need for human labor again, how amazing! Although it was true that quite a few parts of operating a sailing ship could be automated. Same with freight airships, which had solar panels on their upper surfaces and were often entirely robotic. But the ships sailing the oceans of the world, made of graphenated composites very strong and light and also made of captured carbon dioxide, neatly enough, were usually occupied by people who seemed to enjoy the cruises, and the ships often served as floating schools, academies, factories, parties, or prison sentences. Sails were augmented by kite sails sent up far up into the atmosphere to catch stronger winds. This led to navigational hazards, accidents, adventures, indeed a whole new oceanic culture to replace the lost beach cultures, lost at least until the beaches were reestablished at the new higher coastlines; that too was a labor-intensive project.