CFP: Intersectional Automations (Edited Collection) ABSTRACT DEADLINE 1 April 2019

Announcement published by Nathan Rambukkana
Location: Ontario, Canada
Subject Fields: Cultural History / Studies, Race Studies, Sexuality Studies, Women's & Gender History / Studies, Communication

(with apologies for crossposting)

 

Intersectional Automations: Robotics, AI, Algorithms, and Equity 

Edited Collection (Abstracts Due 1 April 2019)

 

This collection will explore a range of situations in which robotics, biotechnological enhancement, artificial intelligence (AI), and algorithmic culture collide with intersectional social justice issues such as race, class, gender, sexuality, ability, and citizenship.

Some call it the fourth industrial revolution (Brinded, 2016; Kaplan, 2015). Robots, AI, and algorithms have grown from their early uptake in certain industries (such as robots in manufacturing) to an accelerating presence in other spheres, ranging from customer service roles (for example, reception, check-outs, food service, and driving) to professional and creative roles previously unheard-of and un-thought-of, such as expert legal and medical systems, automated journalism, and musical and artistic production (Kaplan, 2016; Ramalho, 2017; Hirsch, 2017). The World Economic Forum warns that “this will lead to a net loss of over 5 million jobs in 15 major developed and emerging economies by 2020” (Brinded, 2016): a serious challenge to ethical labour practices, and a looming crisis that has led some to consider alternative societal models, such as Universal Basic Income (Frase, 2016) or a robot tax (Walker, 2017), to compensate.

Meanwhile, there is marked evidence that robots, AI, biotechnology, and algorithms are becoming more integrated into human societies in general, above and beyond their employment roles. Human-machine communication (HMC) has moved from an important yet somewhat marginal field to lodge itself at the centre of societal workings and visions for the future. From autonomous vehicles (Bowles, 2016) to the algorithmic filtering of search results (Noble, 2018) and social media content (Gillespie, 2018), from online harassment and political boosterism via bots (Dewey, 2016; Woolley, Shorey, & Howard, 2018) to sex robots (Levy, 2007; Danaher & MacArthur, 2017), and from ubiquitous AI assistants in our homes and smart devices (Guzman, 2019) to wearable tech that tracks and shares our biometric data (Forlano, 2019) and/or extends our biological capacities (Brooks, 2003; Jones, 2019), such technologies are rapidly mapping themselves onto almost every conceivable realm of human experience.

And yet, there is mounting evidence that the creation and programming of robots, AI, and algorithms, as artifacts of human culture, do not escape that context, sometimes carrying stereotypes, biases, exclusions, and other forms of privilege into their computational logics, platforms, and/or embodiments. One can think of True Companion’s Roxxxy sex robots, which some argue have personality options based on racist and sexist stereotypes of womanhood, for example the barely-18 “Young Yoko” and resistant “Frigid Farah,” which, as Gildea and Richardson (2017) note, seem to fetishize underage girls and sexual assault. Or one could think of the abandoned Amazon HR algorithm which, with a view to automating part of the hiring process, was fed years of resumes and hiring decisions and used computational logic to identify traits historically associated with Amazon hiring decisions, in doing so encoding a preexisting sexism in the HR data: applicants whose work experience or activities included the word “Women’s,” or who were educated at all-women colleges, were often not hired (Jones, 2018). Finally, one could contemplate how polities using data aggregation and predictive algorithms to manage and make decisions about social programs, resource allocation, or policing can end up targeting and profiling poor or racialized populations, with occasionally terrifying results, such as any mistake on an online application being interpreted by an automated system as “failure to cooperate” (Eubanks, 2017).

This edited collection will draw an analytical circle around these interconnected and adjacent issues, turning a critical eye to what is at stake in the automation of aspects of culture. How do equity issues intersect with these fields? Are the pronouncements always already dire, or are there also lines of flight towards more equitable futures in which agentic artefacts and extensions can play an active part? Chapters may address one or multiple equity issues, and submissions that address emergent intersections between them will be given special consideration.

Proposed chapters may address topics such as, but not limited to:

– Algorithmic classism, ableism, racism, and sexism, including issues surrounding content moderation on social media (e.g., Gillespie, 2018), redlining/weblining (e.g., Eubanks, 2017), business (e.g., Jones, 2018), big data (e.g., Ferguson, 2017), or military practices such as Google’s controversial Project Maven (e.g., Holt, 2018), as well as uses of algorithms to address these same issues.

– Issues around robotic labour and poverty, Universal Basic Income, and robotic utopias/dystopias (e.g., Frase, 2016; Kaplan, 2016).

– Issues around the use of deadly autonomous or semi-autonomous robots by the military or non-state actors, such as work surrounding the Campaign to Stop Killer Robots (e.g., Anderson & Waxman, 2012; Crootof, 2015; Gregory, 2011; Karppi, Bolen, & Granata, 2016).

– Issues surrounding sex robotics, teledildonics, VR, and AI sexuality, including stereotypical and sexist sex-robot “personalities” and embodiments (e.g., Gildea & Richardson, 2017); sex robots based on real people without consent (e.g., Gee, 2017); the Campaign Against Sex Robots (e.g., Richardson, 2015; Danaher, Earp, & Sandberg, 2017); teledildonic/VR stream hacking and consent (e.g., Rambukkana & Gautier, 2017; Belamire, 2017); the interplay between robotic brothels and sex worker rights and protests (e.g., Morrish, 2017; Trayner, 2017; Danaher, Earp, & Sandberg, 2017); bots masquerading as real people on dating sites (e.g., Light, 2016; Karppi, 2018); and deepfakes and pornography (e.g., Maras & Alexandrou, 2018). Progressive steps to address such issues or to create equitable sexual futures.

– The politics and ethics of the singularity (e.g., Korb & Nicholson, 2012) and the future status of robotic and AI workers with respect to labour, citizenship, and human rights, for example, work on Hanson Robotics’ Sophia as a Saudi citizen (e.g., Weller, 2017), robotic servitude (e.g., Green, 2016), and the rights of humans interacting with AI (e.g., Shepherd, 2019).

– Assumptions, representations, and discourse surrounding dis/ability and human augmentics, including “supercrip” and “cyborg” discourses and the potential tensions between feminist technoscience (e.g., Haraway, 1990) and critical disability studies (e.g., Allan, 2016; Cascais, 2013).

– How any of these or other issues are depicted in popular or fringe fictions that contain robotic or AI characters (for example, Humans, Neuromancer, Extant, Westworld, Her, Blade Runner, Ex Machina, Ghost in the Shell, Altered Carbon, Black Mirror, Speak, Neon Genesis Evangelion, Questionable Content, etc.).

My goal is to assemble a collection of exemplary abstracts and then approach some top-tier academic publishers with relevant series.

If interested, please send a 750-word abstract, a list of keywords, and a 150-word bio to the editor, Dr. Nathan Rambukkana (n_rambukkana@complexsingularities.net), by 1 April 2019. Drafts will be due 1 October 2019 and final versions 1 April 2020. Please also email Nathan at the above address if you have any questions, and feel free to repost this CFP to your networks.

Contact Information
Dr. Nathan Rambukkana
Assistant Professor, Communication Studies
Wilfrid Laurier University
DAWB 3-136, 75 University Ave W
Waterloo, ON, Canada N2L 3C5
Contact Email
n_rambukkana@complexsingularities.net