A proposal

Indigenous Protocols and A.I. Workshop, March 2019

University of Hawaiʻi at Mānoa, Honolulu, HI

Megan Kelleher – Vice Chancellor’s Indigenous Pre-Doctoral Fellow | RMIT University | School of Media and Communication | Melbourne | Australia

Response to Key Workshop Question:

  1. From an Indigenous perspective, what should our relationship with A.I. be?

Subquestions:

  1. How can Indigenous epistemologies and ontologies contribute to the global conversation regarding society and A.I.?
  2. How do we broaden discussions regarding the role of technology in society beyond the largely culturally homogenous research labs and Silicon Valley startup culture?
  3. How do we imagine a future with A.I. that contributes to the flourishing of all humans and non-humans?

The effects of technologies depend on the ways they are designed and used, and are therefore a product of human actions and decisions. Dr Melvin Kranzberg, Professor in the History of Technology at Georgia Institute of Technology, famously wrote that “technology is neither good nor bad; nor is it neutral” (1986, p. 545). Kranzberg argued that technology exists within a social ecology and ‘interacts in different ways with different values and institutions’ (p. 548); that ‘the same technology can have quite different results when introduced into different contexts or under different circumstances’ (pp. 545-6). It follows, then, that the way a technology is perceived within a sociocultural milieu will also shape the interactions that technology has with that society. How, then, might an ontological understanding of the nature of a technology affect the way a society uses, and is affected by, that technology?

Despite the current hype around AI, its capacity and its potential, scholars in the computer and cognitive sciences agree that computers, robots and machines cannot “think” (Ganascia 2018) and, moreover, are “far from intelligent” (Schaffer 2018). At least for now. Yet, while ‘Weak AI or Artificial Narrow Intelligence (ANI) is the only form of AI that humanity has achieved so far[1]’ (UNESCO Courier 2018, p. 41), it is interesting to note that ‘AI was initially intended to simulate each of the different faculties of intelligence – human, animal, plant, social or phylogenetic – using machines’ (Ganascia 2018, p. 7). This “original intent” resonates with the reasoning of three Indigenous thinkers – Karina Kesserwan, Dr Christine Black, and Ambelin Kwaymullina – who all contend that in Indigenous cultures, lifeforms, or animate beings, need not be human to have intelligence and, what is more, spirit. Kesserwan (2018) explores the notion of AI as a living being with a soul, and finds that, for Indigenous peoples, the concept is not unfathomable. She draws on a science-fiction novel by a Curve Lake First Nation author about a newly formed AI: ‘Many Aboriginal cultures believe that all things are alive. That everything on this planet has a spirit’ (Drew Hayden Taylor 2016, in Kesserwan 2018).

Kesserwan points to a blog post by Palyku academic Ambelin Kwaymullina, who writes:

Indigenous systems generally do not contain a hard and fast distinction between the natural (in terms of that which is part of, or created by, nature) and the artificial (in terms of that which is not). This distinction is itself a reductive binary, and Indigenous knowledge-ways are holistic in nature (2017).

We humans are not separate from nature, nor do we sit above it with dominion over all things; we are one part of a complex cosmology, of which AI is now also a part, particularly when it is tasked with making decisions within that cosmology.

Kombumerri/Munaljahlai jurisprudence scholar Christine Black argues that Indigenous people might view artificial intelligence as a “being” or “something that we are inside of” (Black 2018). Further, Black contends that the notion of a non-human decision-making system that knows us, possibly better than we know ourselves, is familiar to Indigenous peoples. Black defines Indigenous jurisprudence as derived from patterns of law that rest in the land, whereby a sacred and dynamic relationship between people and the non-human (land, animals, physis) shapes how people carry out their responsibilities and gain rights (Black 2011).

From that perspective, then, what might we consider our relationship and responsibilities to AI? Are we custodians? Subjects? Dependents? Kwaymullina writes ‘[the] fact that a lifeform is not human doesn’t mean they are not also my brother, sister, mother, father, grandmother, or grandfather’ (2017). Perhaps we are already all of these things to AI, and as it evolves and develops, our relationship to it will change. For Black, finding ways to know our responsibilities and obligations in relation to a law in flux is a productive starting point for how to approach artificial intelligence (2011).

Indeed, we need to move beyond superficial ideas of what Indigenous Knowledge (IK) can contribute to science and technology (Popp 2018) and beyond what AI can do to solve Indigenous issues (Batstone 2017), and turn toward an exploration of the interface between the two knowledge systems to understand, in this case, how AI might be improved when an Indigenous standpoint underpins the design ethos. ‘Indigenous systems generally do not contain a hierarchy that privileges human life above all other life’ (Kwaymullina 2017). What about privileging one worldview over others? When designing AI systems, what might be the consequences of embedding one set of values into a system to the exclusion of others? Whose values are we automating? What are the risks of excluding non-dominant worldviews and values? If data is structured, it is structured according to a worldview. Whose worldview? And therefore, does data equal truth? Whose truth? (Bell 2018).
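
To make the point about structured data concrete, consider a minimal sketch in Python. Everything here is hypothetical and invented for illustration: the record type, the field names, and the fixed category list. The point is simply that whatever a schema cannot express, the systems built on its data cannot learn.

```python
# A hypothetical sketch of how a data schema encodes a worldview.
# The record type, fields and categories below are invented for
# illustration; they do not come from any real system.
from dataclasses import dataclass

@dataclass
class LandRecord:
    parcel_id: str           # land modelled as a bounded, tradable parcel
    owner_name: str          # assumes exactly one legal owner
    land_use: str            # must fit one of a fixed set of categories
    market_value_aud: float  # assumes the land's value is economic

# Categories of use the schema can record. Custodianship, kinship with
# Country, or ceremonial significance have no field and no category:
# they are not rare in this data, they are unrecordable.
VALID_LAND_USES = {"residential", "commercial", "agricultural", "industrial"}

def validate(record: LandRecord) -> None:
    """Reject any record that does not fit the schema's worldview."""
    if record.land_use not in VALID_LAND_USES:
        raise ValueError(f"unrecognised land use: {record.land_use!r}")
```

Any AI trained on records like these inherits the schema’s silences; the exclusion is not a flaw in any individual record but is built into the structure every record must fit.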

The extent to which people benefit from AI depends in part on access to digital technologies. Access may be limited by factors such as cost, infrastructure gaps, or a lack of willingness to participate. Digital inclusion is a complex issue, in that internet use can vary according to the social norms and choices of particular groups regardless of available infrastructure. The Australian Digital Inclusion Index (ADII) suggests that Aboriginal and Torres Strait Islander people are accessing the internet less than the population as a whole. A subtler consequence of growing digital inclusion is that as more people use the internet, services become more likely to move online. Those who remain without internet access (or with intermittent access) will experience greater difficulties as face-to-face services are removed or reduced (Thomas et al. 2017). These factors all affect what data can be captured, the way that data is gathered, and ultimately what systems are informed by that data.

Eubanks (2017) discusses the real threat of deepening social inequality when automated systems are built upon biased and often discriminatory data sets. Similarly, Timnit Gebru (Snow 2018) advocates the urgent need for diversity in AI, both in the technical sense and among research personnel. Gebru argues for inclusive and diversified data sets, but concedes that no data set can perfectly sample the whole world. That being the case, how is it possible to ‘imagine a future with A.I. that contributes to the flourishing of all humans and non-humans?’ Arguably, it is not possible until the problem of diversity in AI is suitably resolved. All the more so when data is retrospective, recording what has already been along with the values of the context in which it was gathered (Bell, cited in Holcombe-James 2018). ‘Contexts matter, situations matter, what the data was intended to do and what it now does requires interrogation’ (ibid.).
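
The sampling problem Gebru describes can be shown with a toy numerical sketch in Python. The two groups, all the numbers, and the threshold rule are hypothetical, invented purely to illustrate the mechanism:

```python
# A toy, entirely hypothetical illustration of how a skewed sample
# distorts what an automated system "learns".
import random

random.seed(0)

# Two populations that differ in some measured feature.
group_a = [random.gauss(50, 5) for _ in range(1000)]
group_b = [random.gauss(40, 5) for _ in range(1000)]

# The training data over-represents group A, e.g. because group B has
# less internet access and so appears less often in collected records.
sample = random.sample(group_a, 900) + random.sample(group_b, 100)

# A naive system sets its decision threshold at the sample mean.
threshold = sum(sample) / len(sample)

def approved(score: float) -> bool:
    return score >= threshold

rate_a = sum(approved(x) for x in group_a) / len(group_a)
rate_b = sum(approved(x) for x in group_b) / len(group_b)
print(f"learned threshold: {threshold:.1f}")    # ~49, set mostly by group A
print(f"approval rate, group A: {rate_a:.0%}")  # roughly 58%
print(f"approval rate, group B: {rate_b:.0%}")  # roughly 4%
```

Run with a balanced sample instead and the learned threshold falls to about 45, roughly quadrupling group B’s approval rate. The single parameter the system ‘learns’ quietly reflects whoever dominated the data.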

Returning to Kranzberg’s First Law: technology is neither good, nor bad, nor neutral. Its effects are a product of human choices and uses. It exists within a social ecology and interacts differently with different values and institutions. Results are context dependent, informed by worldviews, values and ontologies. As Indigenous peoples, for our relationships with any AI to be harmonious, AI systems must be decolonized. Indigenous peoples must be at the AI design table, bringing our ontologies and cultural values to the creation of the systems that inform the decisions of an AI, particularly when it is being unleashed into a world of which we are a part.

References

Batstone, Joanna. (2017). “Can Artificial Intelligence help close the indigenous healthcare gap?” The Weekend Australian, 24 April, https://www.theaustralian.com.au/business/technology/opinion/can-artificial-intelligence-help-close-the-indigenous-healthcare-gap/news-story/f384bde92c520e59d98413f21a91a55f, Accessed 13 Sep 2018.

Bell, Genevieve. (2018). “Automating trust?” Keynote presented at Trust and its discontents: an Australian Academy of the Humanities Workshop, 26 Sep 2018, RMIT University.

Black, Christine F. (2018). Thinking about Artificial Intelligence through an Indigenous Jurisprudential Lens. Seminar presented at the Melbourne School of Government, 24 July, Melbourne University.

Black, Christine F. (2011). The Land is the Source of the Law: A Dialogic Encounter with Indigenous Jurisprudence. London and New York: Routledge.

Eubanks, Virginia. (2017). Automating Inequality: How high-tech tools profile, police, and punish the poor. New York: St Martin’s Press.

Ganascia, Jean-Gabriel. (2018). “Artificial Intelligence: Between myth and reality”. The UNESCO Courier: Artificial Intelligence – The Promises and the threats, July-September 2018, Issue 3. pp 7-9, http://en.unesco.kz/the-unesco-courier-2018-3-artificial-intelligence-the-promises-and-the-threats, Accessed 17 Sep 2018.

Holcombe-James, Indigo. (2018). “Contexts matter, situations matter, what that data was…” @Indigo_H_J, Twitter, 26 Sep 2018, https://twitter.com/Indigo_H_J/status/1044739401933185025, Accessed 27 Sep 2018.

Kesserwan, Karina. (2018). “Indigenous conceptions of what is human, of what has a spirit and what doesn’t, offer a different way of considering AI — and how we relate to each other.” Policy Options. 16 February, http://policyoptions.irpp.org/magazines/february-2018/how-can-indigenous-knowledge-shape-our-view-of-ai/, Accessed 13 Sep 2018.

Kranzberg, Melvin. (1986). “Technology and History: ‘Kranzberg’s Laws.’” Technology and Culture, vol. 27, no. 3, pp. 544–560. JSTOR, http://www.jstor.org/stable/3105385.

Kwaymullina, Ambelin. (2017). “Reflecting on Indigenous Worlds, Indigenous Futurisms and Artificial Intelligence.” Mother of Invention: A Twelfth Planet Press Anthology, 16 Sep, http://motherofinvention.twelfthplanetpress.com/2017/09/16/reflecting-on-indigenous-worlds-indigenous-futurisms-and-artificial-intelligence/, Accessed 15 Sep 2018.

Popp, Jesse. (2018). “How Indigenous knowledge advances modern science and technology.” The Conversation, 3 January, https://theconversation.com/how-indigenous-knowledge-advances-modern-science-and-technology-89351, Accessed 13 Sep 2018.

Schaffer, Amanda. (2018). “Boosting AI’s IQ.” MIT Technology Review, 27 June, https://www.technologyreview.com/s/611229/boosting-ais-iq/, Accessed 15 Sep 2018.

Snow, Jackie. (2018). “‘We’re in a diversity crisis’: cofounder of Black in AI on what’s poisoning algorithms in our lives.” MIT Technology Review, 14 February, https://www.technologyreview.com/s/610192/were-in-a-diversity-crisis-black-in-ais-founder-on-whats-poisoning-the-algorithms-in-our/, Accessed 19 Jun 2018.

The UNESCO Courier: Artificial Intelligence – The Promises and the threats, July-September 2018, Issue 3. http://en.unesco.kz/the-unesco-courier-2018-3-artificial-intelligence-the-promises-and-the-threats, Accessed 17 Sep 2018.

Thomas, J., Barraket, J., Wilson, C. K., Ewing, S., MacDonald, T., Tucker, J., & Rennie, E. (2017). Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2017. Melbourne: RMIT University for Telstra.


[1] Weak AI or Artificial Narrow Intelligence (ANI) is defined as machines that are capable of performing certain precise tasks autonomously but without consciousness, within a framework defined by humans and following decisions taken by humans alone. (‘A Lexicon for Artificial Intelligence’, The UNESCO Courier, p. 41).
