The iconic British science fiction series Doctor Who has always told stories warning of the potentially dehumanizing effects of technology, most famously in the form of the Daleks and the armies of Cybermen. The series 11 episode “Kerblam!”, however, offers an especially unsatisfying conclusion about technological systems, one that would have greatly benefitted from some reading into the philosophy of technology – and a dose of critical librarianship.
In the episode, The Doctor (Jodie Whittaker) and her companions receive a call for help hidden in a package sent to her by the massive galactic retailer Kerblam (clearly the far-future equivalent of Amazon). When they arrive at the company’s vast warehouse on the moon of the planet Kandoka, they learn that it employs 10,000 human personnel (“organics”), a mere 10% of a workforce that is otherwise robotic. Employees assigned the repetitive work of retrieving, packing and shipping are mysteriously disappearing, and it seems as if the robots are to blame: they are the ones cheerfully (if creepily) enforcing the company’s rigid performance quotas and discouraging conversation.
As an Amazon analogue, the Kerblam company hits a bit too close to home: robots and planetary scale aside, there’s little to distinguish fiction from reality. As Jessica Bruder describes in her 2017 book Nomadland, Amazon’s operations depend on the exploitation of vulnerable, aging employees who must work 10-hour days of unforgiving, numbing labour pushing carts around cavernous warehouses, often risking injury. In the episode, Yaz (Mandip Gill) scours the aisles with Dan, a father working to support a daughter he hasn’t seen for months, and who he hopes will never have to work at a job like his.
Even so, the episode disappoints in its effort to identify villains. After we learn that a disgruntled low-level employee is planning a massive act of terrorism in order to destroy consumer confidence in Kerblam’s automation, and thus bring down the entire system, we’re reassured that the company isn’t at fault, nor are the managers, the robots or the massive computer system running it all: they just need to add more humans to the mix, and all will be well. The Doctor concludes, “systems aren’t good or bad. It’s how we use them.”
This is highly problematic, even without the context of the soulless corporate globalization (or, rather, galacticization) on display. The Doctor seems to be advocating an “instrumental” conception of technology, which holds that the merits of a given technology are to be weighed according to its uses or ends. Technologies are seen as value-free, at the service of other dominant values in society. Most contemporary philosophers of technology counter this view: because technologies are goal-oriented – that is, designed only (or mainly) for certain purposes and not others (a lawn mower can’t be used to vacuum a carpet) – the assumptions and goals underlying their development are value-laden. Embedded within and emerging from particular cultural contexts, technologies can’t be separated from those contexts. Technological systems therefore tend to reinforce and reproduce the values of the societies that produce them.
As Simon Barron and Andrew Preater write in their chapter “Critical Systems Librarianship” in the 2018 book The Politics of Theory and the Practice of Critical Librarianship, library technologies are inherently non-neutral and power-laden. Developed, built and sold by large corporations, they have the potential to compromise users’ privacy or facilitate their surveillance (we see both used in the episode to control the Kandokan workers and deliver products). Library systems also employ algorithms that may be structured to prioritize results from certain corporate databases over others, while preventing researchers from discovering content related to marginalized communities, such as Muslims or LGBTQ+ individuals. For these reasons and more, the development and use of technology is not purely an instrumental matter of solving a particular practical problem; it must be theorized in terms of diverse human needs, ethics, ideology and power.
Corporate control makes the notion of neutrality even more specious: technologies always advance particular interests over others, since they are quickly captured by powerful economic and political actors who use them to their advantage and who seek to lock other systems into continuing to use those technologies into the future. This tendency led deep ecologist and activist Jerry Mander (author of one of my favourite books, Four Arguments for the Elimination of Television) to observe:
the idea that technology is neutral is itself not neutral – it directly serves the interests of the people who benefit from our inability to see where the juggernaut is headed…[C]omputers…in theory, can empower individuals and small groups and produce a new information democracy. In fact, the issue of who benefits most from computers was already settled when they were invented. Computers, like television, are far more valuable and helpful to the military, to multinational corporations, to international banking, to governments, and to institutions of surveillance and control – all of whom use this technology on a scale and with a speed that are beyond our imaginings – than they ever will be to you and me. Computers have made it possible to instantaneously move staggering amounts of capital, information, and equipment throughout the world, giving unprecedented power to the largest institutions on the earth.
Applying Mander’s observations to this episode of Doctor Who, we see a single corporation operating on a galactic scale, able to deliver packages instantaneously (i.e., far faster than light) to wherever its customers happen to be – implying real-time, galaxy-wide surveillance of everyone. It would also entail monolithic technological “lock-in” on the same scale to coordinate logistics, credit, transactions, suppliers and human resources, putting an end to all privacy. The corporation would be so powerful and omnipresent that it would likely devastate independent technological innovation – to say nothing of small businesses – across the galaxy. Merely increasing human staffing from 10% to 50% – the episode’s “happy ending” – would only ensure that more humans are subject to the company’s all-consuming homogenization and control.
Obviously, thinking this scenario through to its logical conclusion gets a bit ridiculous, but it shows just how monumentally inadequate The Doctor’s moralizing really is. The goals to which all these technologies would necessarily be oriented – universal surveillance and corporate obeisance – are the uses to which they would be put; they could never be made benign. By asserting the neutrality of these arrangements, The Doctor affirms their legitimacy. As Barron and Preater put it, “adopting a position of neutrality reflects a deliberate choice to side with the status quo” (p. 91).