Tuesday, May 30, 2023

Why isn't science fiction interested in AI?

Since the century began, there has been a remarkable surge in AI research and application. This has mostly involved AI of a particular kind: Machine Learning (ML), especially Deep Learning. In brief, ML tends to place much less emphasis on carefully curated knowledge bases and hand-crafted rules of inference. Instead, ML usually uses a kind of automated trial-and-error approach, based on a little statistics, a lot of data, and a lot of computing power. When we hear of AI transforming journalism, healthcare, policing, defence, finance, agriculture, law, conservation, energy, development, disaster preparedness, supply chain logistics, software development, and many other domains, the AI in question is typically some form of ML. 

Despite the click-bait title of this post, AI is an extremely prevalent theme in recent science fiction. Isn't it? Well, that depends on which AI. Science fiction has been curiously slow, even reluctant, to reflect the ML renaissance. Until quite recently, ML research has tended to de-emphasise anthropomorphic Artificial General Intelligence, emphasising instead domain-specific AI applications. Examples include Snapchat's AR filters, Google Translate, Amazon Alexa, Tesla Autopilot, ChatGPT, MidJourney, platformized markets like Uber and Airbnb, the recommendation engines that drive Netflix and YouTube, and the curation of social media feeds.

By comparison, as of May 2023 the Science Fiction Encyclopaedia's entry for AI still tellingly states: "Most writers would agree that for a computer or other machine of some sort to qualify as an AI it must be self-aware." Over the past decade, science fiction about AI has continued to coalesce around questions such as: Is it possible for a machine to be sentient, to experience emotions, or to exercise free will? Between humans and machines, can there be sex, love, and romance? Will our own creations rise up against us, perhaps by departing from the rules we set them, perhaps by applying them all too literally? Could an AI grow beyond our powers of comprehension, and become god-like? And what might the oppression of sentient AIs teach us about colonialism, racism, misogyny, ableism, queerphobia, and the systemic treatment of some lives as morally more valuable than others?

Whether or not these questions make for good stories, or are interesting questions in their own right, they are not tightly integrated into the realities of AI research. This disconnect between science fictional AI and real AI is also reflected in science fiction scholarship. AI Narratives: A History of Imaginative Thinking about Intelligent Machines (2020) is a recent collection of critical essays on AI and literature. While frequently compelling and insightful within its chosen scope, it barely mentions Machine Learning. Terms such as bias, black box, explainability, alignment, label, classifier, parameter, loss function, architecture, and supervised vs. unsupervised learning appear seldom or never. (I think two, maybe two-and-a-half, chapters are clear exceptions.)

Of course, there are some stories that engage deeply with Machine Learning as it is actually practiced. My impression is that these stories remain rare overall, and that they have yet to coalesce into their own richly intertextual conversation about Machine Learning. Some promising counterexamples emphasise 'the algorithm' or 'the platform,' rather than AI as such. They find some storytelling space where a new discourse intersects with an old one: where Critical Data Studies meets the old science fictional delight in robots rigorously following rules, and the humans that might get ground up in those unstoppable cogs. However, even in their more critical moments, many such stories are prone to reinforce the political and ethical framings preferred by tech companies. We can speculate why this might be the case. The economic conditions of their production are worth noting — is there a preponderance of storytelling funded by think tanks, academia, tech companies and tech media, perhaps? Or perhaps there is a sort of discursive predisposition at play, related to the amount of energy it takes to speak outside of the established science fiction tropes. Having laboriously disentangled themselves from questions like, “Please may I have an AI girlfriend?” and “Crikey will I get an AI God?”, are these stories too exhausted to escape from questions like, “How can we balance the need for training data at scale with the privacy rights of individuals?” and “How will the widespread adoption of AI and automation impact jobs and the economy”? Such questions may need to be posed in some contexts, certainly. But they also carry deep techno-solutionist and techno-determinist assumptions. Science fiction could do better!

Writing in mid-2023, I see signs that some aspects of this situation may soon shift. A more recent critical collection, Imagining AI: How the World Sees Intelligent Machines (2023), which does solid and timely work in challenging Eurocentrism in literary and cultural AI studies, does pay a little more attention to Machine Learning. Even if writers have been ignoring Machine Learning, Machine Learning has not been ignoring writers. And now OpenAI's ChatGPT is creating an unprecedented level of conversation about Machine Learning in online writing communities. Very recently, the Science Fiction Writers of America collated on its website over fifty articles and posts written by its members on the topic of using AI in creative work. The prominent science fiction magazine Clarkesworld recently closed to submissions after being inundated with ChatGPT-generated stories. The window for limiting global heating to 1.5 degrees, agreed in the 2015 Paris Agreement, is more or less closing now, and questions are being asked about the carbon cost of computationally intensive Machine Learning (Vicuna is being touted as a lightweight ChatGPT alternative). Hollywood writers are on strike over, among other things, AI. And in the midst of a messy public rivalry between Google and Microsoft, we are witnessing a sort of convergence of discourse about (the social implications of) Machine Learning with older sci-fi tropes: AGI, the Singularity, superintelligence, x-risk.

Whether or not we are at a turning point, it is certainly a moment to take stock of the last decade of science fiction about AI and ask: Is it possible that the few narratives that engage fruitfully with Machine Learning do so despite, rather than because of, the distinctive affordances of the genre? Compared with most other discourses, has science fiction been good at thinking about Machine Learning, okay at it, or maybe especially bad at it?
