This essay uses a post-phenomenological approach based on I-technology-world relations to analyse gender biases in technologies. The focus of this work is on feminised personal assistants like Alexa, Siri, and Cortana. These systems, designed to mimic human interaction, often reinforce stereotypes by associating femininity with obedience and care. This perpetuation of stereotypes poses a dual problem: it reinforces existing biases in human cognition and contributes to their normalisation through daily technological interactions.
This paper argues that gender biases in AI systems are a reflection of “culture in mind,” a concept drawn from Perry Hinton’s work on implicit stereotypes and predictive brain theory. Moreover, I propose that AI technologies hold the potential to challenge and change cultural stereotypes. Culture, the environment in which we are all immersed, is flexible and changes over time. Technology, as an integral part of modern culture, shapes our perceptions and behaviours; by intentionally redesigning the technological environment through non-gendered design and reformulated associative training, we can challenge stereotypes and actively promote gender equality.
Hinton on “Culture in Mind”
Are biases solely the mind’s fault? Hinton would say no. He argues that such a view fails to take into account how culture shapes our cognition. Through the concept of the predictive brain, he demonstrates that implicit stereotypes are embedded within our ordinary perceptual mechanisms (Hinton 2017, 1). Implicit stereotypes enable individuals to function pragmatically within their cultural context: by reducing cognitive load, they help the brain navigate complex social and cultural landscapes efficiently and make faster decisions in familiar contexts. This reconceptualisation helps us understand that biases are not solely the responsibility of individuals but are also profoundly shaped by the environment in which they exist.
Although we may explicitly reject discriminatory beliefs, our brains can still harbour implicit biases that contradict our conscious values (Olsson 2024, 22). These biases are shaped by societal and cultural influences, and despite our conscious efforts to overcome them, they often persist beneath the surface. For example, we might consciously oppose racism or sexism but unconsciously hold stereotypes that affect our judgments and actions. This dissonance between our explicit beliefs and implicit biases reveals the complexity of human cognition and the challenges of truly eradicating prejudice from our behaviour.
The notion of implicit stereotypes refers to a fixed set of attributes that are associated with a particular social group (Hinton 2017, 1). This concept is grounded in two theoretical frameworks: (1) associative networks in semantic memory and (2) automatic activation. For instance, in a culture where writing is commonly done with a pen, the concept of “pen” is readily activated when the concept of “paper” is encountered. This connection is reinforced by the frequent pairing of these objects in daily use, embedding their association in memory (Hinton 2017, 4). While this example is straightforward, the implications become more significant when considering the associations between gender and cognitive traits.
In societies where women predominantly occupy caregiving professions—such as nurses, teachers, or maids—memory tends to associate women with traits like “caring,” “verbally skilled,” or “multitasking.” Conversely, in contexts where men are more likely to work in managerial roles, memory links men with traits like “logical thinking” or “leadership skills.” These culturally embedded patterns highlight how implicit stereotypes are reinforced through the associative processes of memory.
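To make the associative mechanism concrete, the following sketch uses an invented toy “corpus” of paired concepts to show how frequent co-occurrence alone can make one concept activate another; the pairings and counts are assumptions for illustration, not empirical data.

```python
from collections import Counter

# Toy "cultural experience": each event pairs concepts that occur together.
# The corpus below is purely illustrative, not real data.
experiences = [
    ("pen", "paper"), ("pen", "paper"), ("pen", "paper"),
    ("woman", "caring"), ("woman", "caring"), ("woman", "multitasking"),
    ("man", "leadership"), ("man", "logical"),
]

# Associative strength as a simple co-occurrence count: the more often two
# concepts are paired, the more readily one brings the other to mind.
associations = Counter(frozenset(pair) for pair in experiences)

def activation(cue, target):
    """Relative ease with which `cue` activates `target` in this toy memory."""
    total = sum(count for pair, count in associations.items() if cue in pair)
    return associations[frozenset((cue, target))] / total if total else 0.0

print(activation("pen", "paper"))      # 1.0   - strongly paired
print(activation("woman", "caring"))   # ~0.67 - culturally reinforced link
print(activation("woman", "logical"))  # 0.0   - never paired in this toy corpus
```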
Perception itself is a form of prediction aimed at minimising discrepancies between expectations and actual experiences (Clark 2013, 181). This perspective suggests that no mind is entirely unbiased; even individuals who consider themselves free of prejudice possess implicit stereotypes embedded in their semantic memory (Hinton 2017, 2). In other words, the predictive brain allows us to reconceptualise cognitive bias—not as a rigid belief about a particular social group, but as a perceptual mechanism that reduces surprise, thereby facilitating communication, understanding, and effective functioning within a social environment.
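As a minimal illustration of this predictive logic, the sketch below updates a single expectation in proportion to its prediction error, so that frequent experiences come to dominate the expectation; the starting value, learning rate, and sequence of encounters are all illustrative assumptions.

```python
# Minimal sketch of prediction-error minimisation: the "brain" keeps a
# prediction of how likely a nurse is to be a woman and nudges it toward
# each new observation, reducing future surprise.
# The starting value and learning rate are illustrative assumptions.

prediction = 0.5          # initially no expectation either way
learning_rate = 0.1

observations = [1, 1, 1, 1, 0, 1, 1, 1]  # 1 = female nurse encountered

for obs in observations:
    error = obs - prediction              # surprise: mismatch with expectation
    prediction += learning_rate * error   # update to reduce future surprise

print(round(prediction, 2))  # drifts well above 0.5 after mostly-female encounters
```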
To explain the cultural conditioning of implicit stereotypes in human cognition, Hinton introduces the concept of “culture in mind.” This notion highlights that the human mind is never an entirely intrinsic faculty but is always shaped and supported by its environment. Within this framework, implicit stereotypes can be understood as a form of “cultural knowledge” (Hinton 2017, 6), emphasising their fluid and adaptable nature. Culture, in this sense, is not a static entity existing “out there” but an active, ongoing process of social construction in which individuals actively participate.
Addressing stereotypisation, therefore, requires shifting focus away from individual responsibilisation to broader strategies of meaning-making within diverse social and cultural networks. These strategies—such as media representation—play a crucial role in shaping and communicating cultural associations. Over time, repeated exposure to alternative representations can probabilistically reconfigure the associative networks in memory, thereby reducing the persistence of stereotypes (Hinton 2017, 7).
Gender-biased Technology
Most people find it hard to believe that technology can be gender-biased. We tend to think of computers, particularly their software, as neutral and impartial. Moreover, we, as humans, tend to trust computers more when they mimic human interaction (Ruijten et al. 2018, 12; Pitardi and Marriott 2021, 638). A notable example is natural language processing (NLP), which analyses language data and underpins systems designed to converse with humans in a human-like manner.
However, upon closer examination, these interactions are not always as inclusive or neutral as they ought to be for certain groups. Instances of gender bias are evident in tools such as Google Translate. The English word “nurse”, for example, has two forms in Russian: the male “медбрат” and the female “медсестра”. Although the translator could show both options, it does not: translating “nurse” yields only the female form, while “firefighter”, which likewise has male and female forms in Russian, yields only the male form. Additionally, algorithms sometimes auto-correct text from female to male forms. As a result, female users are often addressed as though they were male, leading them to become accustomed to this form of interaction (Wellner 2020, 192).
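The defaulting behaviour described here can be illustrated with a small sketch. This is not Google Translate’s actual pipeline; it simply shows how a system forced to output a single form, and choosing the statistically dominant one, reproduces exactly this pattern. The corpus counts are invented.

```python
# Toy illustration (not Google Translate's actual pipeline): if a system must
# output exactly one translation, choosing the corpus-frequent form reproduces
# the behaviour described above. The counts are made up for illustration.
corpus_counts = {
    "nurse":       {"медсестра": 9500, "медбрат": 500},    # female form dominates
    "firefighter": {"пожарная":  300,  "пожарный": 9700},  # male form dominates
}

def translate_single_form(word):
    """Return only the statistically most frequent gendered form."""
    forms = corpus_counts[word]
    return max(forms, key=forms.get)

print(translate_single_form("nurse"))        # медсестра - female form only
print(translate_single_form("firefighter"))  # пожарный  - male form only
```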
Some researchers explain this by pointing out that datasets form the foundation upon which algorithms assess which words are statistically likely to appear alongside others in a sentence. A study by Caliskan et al. (2017) demonstrated that words such as “woman” or “girl” are more frequently associated with terms related to family, whereas “man” is more commonly linked to career-related words. Computers reduce meaning to statistical proximity, which is what enables algorithms to “understand” text in a certain way. However, this statistical approach creates a feedback loop: gender-biased outcomes are fed back into the system and thereby reinforced (Wellner 2020, 129). Over time, this cyclical process entrenches stereotypes within the algorithm, causing it to perpetuate gendered assumptions that reflect the biases present in the original dataset.
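Caliskan and colleagues measured such associations with an embedding association test; the toy version below uses invented two-dimensional vectors (real tests use high-dimensional embeddings trained on large corpora) to show how the score captures the family/career asymmetry.

```python
import numpy as np

# Toy version of a WEAT-style association score (cf. Caliskan et al. 2017).
# The 2-D vectors below are invented for illustration; real tests use
# high-dimensional embeddings trained on large text corpora.
vectors = {
    "woman":  np.array([0.9, 0.1]), "man":    np.array([0.1, 0.9]),
    "family": np.array([0.8, 0.2]), "home":   np.array([0.7, 0.3]),
    "career": np.array([0.2, 0.8]), "salary": np.array([0.3, 0.7]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    sim_a = np.mean([cos(vectors[word], vectors[a]) for a in attrs_a])
    sim_b = np.mean([cos(vectors[word], vectors[b]) for b in attrs_b])
    return sim_a - sim_b

family, career = ["family", "home"], ["career", "salary"]
print(round(association("woman", family, career), 2))  # positive: woman ~ family
print(round(association("man",   family, career), 2))  # negative: man ~ career
```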
This issue becomes particularly problematic because of the scale and influence of AI systems. When algorithms trained on biased datasets are deployed in widely used tools—such as job recruitment platforms, customer service bots, or automated translation services—they can subtly yet significantly shape societal perceptions and behaviours. Additionally, the problem is compounded by the opacity of machine learning systems. This “black box” nature of AI makes it difficult to detect and correct biases, leaving them embedded and unchallenged.
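The feedback loop mentioned above can be sketched as a simple simulation in which the deployed system’s own outputs re-enter its training data each round, so that an initial skew hardens over time; all quantities are illustrative.

```python
# Minimal sketch of the feedback loop: each round the system is retrained on
# data that includes its own previous outputs, so an initial skew hardens.
# All numbers are illustrative.
female_assoc, male_assoc = 60, 40   # initial corpus counts for "nurse"

for generation in range(5):
    p_female = female_assoc / (female_assoc + male_assoc)
    # The deployed model emits 100 new "nurse" mentions, always choosing the
    # currently dominant form, and these outputs re-enter the corpus.
    if p_female >= 0.5:
        female_assoc += 100
    else:
        male_assoc += 100
    print(f"generation {generation}: share of female form = {p_female:.2f}")
```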
AI Like Humans
In short, the transfer of human cognitive biases onto technology stems from the assumption that AI should mimic human interaction as precisely as possible. This presupposition becomes particularly evident in the case of personal assistants such as Siri, Alexa, Cortana, or Google Home. Notably, most companies (e.g., Amazon and Apple) assign these assistants female names, and their voices are programmed to sound like women. This reflects ingrained cognitive and affective assumptions about women, such as subservience, obedience, and caregiving.
Taylor Walker argues that the creators of Alexa—the only personal assistant on the market unable to switch to a male voice—made a conscious decision to design it with a female identity. This decision, according to Walker, draws on the historical association of women with domestic labour in the home (Walker 2020, 1). Alexa, as an AI, represents a digital reconstruction of a woman within a familiar gendered narrative, thereby perpetuating harmful stereotypes and deepening existing biases. Listening to an obedient female voice that performs digital domestic tasks without resistance reinforces implicit stereotypes about women.
Why Is It Not OK?
The defence often invoked by AI creators to absolve themselves of responsibility for perpetuating such stereotypes is rooted in the mimicking argument: if human interaction is inherently biased, why should we expect technology to be free of those biases? This argument, however, rests on a thin and problematic foundation. While technology reflects culture, it also possesses the potential to shape it, making it crucial to critically assess how AI systems influence societal norms.
In this context, Hinton’s concept of “culture in mind” can be adapted to a framework of “technology in mind,” recognising technology as an integral part of culture. Technologies are deeply embedded in our daily lives and actively shape our semantic memory, reinforcing the implicit stereotypes ingrained in human cognition. However, this influence can also be harnessed for positive change: by altering the associations and narratives presented through technology, we can challenge and counteract existing biases.
Post-phenomenology of Technology
Post-phenomenological studies examine technologies in terms of the relationships they mediate between humans and technological artefacts. This perspective focuses on the ways technologies shape our interactions with the world and with ourselves. It highlights that we perceive the world through the lens of technology while simultaneously reflecting upon our own identities through our technological engagements.
Unlike traditional views that approach technologies as purely functional or instrumental objects, post-phenomenology emphasises their role as mediators. Technologies actively influence human experiences and practices, fundamentally shaping how we live and perceive the world. Recognising this, we can better understand the significant impact technology has on our everyday lives and the need to critically engage with it as a cultural force. Applying this framework of the I-technology-world relationship to personal assistants like Alexa reveals interesting dynamics: Alexa, for instance, can be seen as an extension of its owner, part of their embodied experience.
Who Is Responsible?
Regardless of the reasons behind biased outcomes in technologies, such outcomes are often claimed to be beyond the control of developers. Creators of these technologies frequently shift responsibility onto the algorithm, arguing that it is merely the result of statistical processes. Another common explanation is that AI systems are designed to mirror the world, and since the world is rife with biases and prejudices, it is only natural for AI to reflect them. However, ignoring the problem is not a legitimate solution; rather, it constitutes a defensive strategy to evade accountability.
By applying the theoretical framework of post-phenomenology of technology, we can argue that technologies, as non-neutral mediators of our relationships with the world, ourselves, and others, possess a distinct form of technological intentionality. This intentionality renders them morally significant agents within these interactions.
Following the ideas of Ihde and Verbeek, Wellner contends that when technologies mediate the relationship between humans and reality, they help to construct not only the objects that are experienced but also the subject experiencing them. This mediating role gives technologies a constitutive agency, which goes beyond merely copying human actions—it actively constructs a worldview that often centres on male-dominated perspectives (Wellner 2020, 146). This constructive agency is precisely what allows us to ascribe moral responsibility to technologies and, by extension, to their creators.
Can We Break the Loop?
This moral responsibility cannot remain abstract or theoretical; it must be actively claimed and implemented by the developers and designers of these systems. As suggested in the preceding sections, and drawing on Hinton’s modified concept of “technology in mind,” one productive way to enact this responsibility is to change the associations embedded within our semantic memory. Technologies have the potential not only to perpetuate existing biases but also to challenge and reshape the cognitive frameworks through which we understand the world. By deliberately altering these associations, developers can create systems that counteract harmful stereotypes and promote more equitable narratives.
If our perception functions as a mechanism for prediction and minimising surprise, as the concept of the predictive brain suggests, then the higher the frequency of certain experiences, the greater the likelihood of forming and reinforcing associations based on them. These associations are then stored in our semantic memory. For instance, if we were to encounter more gender-neutral bots that employ voices free of gendered characteristics, we could reduce the tendency to associate qualities like service or obedience with any specific gender.
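A small sketch of this frequency logic: if the stored association is approximated as the proportion of remembered encounters, a sustained change in exposure dilutes it. The counts and the exposure schedule below are assumptions for illustration.

```python
# Sketch of how changed exposure reshapes a stored association. We track how
# strongly "assistant" is linked to a female voice as a simple proportion of
# remembered encounters; the counts and schedule are illustrative assumptions.
encounters = {"female-voiced": 90, "neutral-voiced": 10}

def assoc_strength():
    total = sum(encounters.values())
    return encounters["female-voiced"] / total

print(f"before: {assoc_strength():.2f}")   # 0.90 - strongly gendered

# Suppose future interactions are dominated by gender-neutral assistants.
for _ in range(200):
    encounters["neutral-voiced"] += 1

print(f"after:  {assoc_strength():.2f}")   # 0.30 - association diluted
```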
This shift would not only influence our perceptions but also carry over into the programming of the technology itself. Technologies that mimic human interaction could present a genderless image of such assistants, reflecting this neutral perspective in their design and training data. Such an approach would reshape the associative networks that underpin their actions and meanings. Over time, this performative shift could contribute to altering societal perceptions of gendered cognitive and behavioural skills, encouraging more equitable views.
The moral responsibility of technologies thus involves two interrelated aspects:
- Gender-neutral design
- Reformulated associative training
By addressing these aspects, we can begin to reframe the role of technology from perpetuating stereotypes to actively challenging them. Technologies could then serve as tools for combating ingrained biases and promoting more inclusive societal narratives.
In this way, technologies not only hold the potential to reflect cultural changes but also to drive them. By deliberately designing systems that move away from stereotypical representations, we can ensure that their role in human interactions becomes a force for positive change, promoting greater equity in how we think about and relate to one another.
Since technology is such a powerful tool, and cultural biases have the potential to shift when the environment changes, technological artefacts hold significant potential to transform from being perceived as our enemies to becoming our allies. This transformation can occur if AI creators deliberately avoid relying on biased global statistics that reinforce implicit prejudices and instead replace them with frameworks that promote equal opportunities.
One proposed solution to the issue of biased translations could involve presenting both male and female forms of a word simultaneously, separated by a slash symbol, allowing individuals the freedom to choose. This approach would help normalise the idea that any profession can be undertaken by both men and women, challenging the notion that women should always occupy the position of “the other.”
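A sketch of how such an interface might behave is given below; the tiny English-to-Russian dictionary and the function name are illustrative assumptions, not an existing API.

```python
# Sketch of the proposal above: when the target language has both a male and
# a female form, show both, separated by a slash, instead of silently picking
# one. The tiny English-to-Russian dictionary is illustrative only.
GENDERED_FORMS = {
    "nurse":       ("медбрат", "медсестра"),
    "firefighter": ("пожарный", "пожарная"),
}

def translate_inclusive(word):
    forms = GENDERED_FORMS.get(word)
    if forms:
        return "/".join(forms)   # present both forms and let the user choose
    return word                  # fall back for words without gendered variants

print(translate_inclusive("nurse"))        # медбрат/медсестра
print(translate_inclusive("firefighter"))  # пожарный/пожарная
```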
Bibliography
Brey, P. (2009). Human enhancement and personal identity. In J. K. B. Olsen, E. Selinger, & S. Riis (Eds.), New waves in philosophy of technology (pp. 169–185). Palgrave Macmillan. https://doi.org/10.1057/9780230227279_9
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Clark, A., & Chalmers, D. J. (1998). The extended mind. Analysis, 58(1), 7–19. http://www.jstor.org/stable/3328150
Heidegger, M. (1962). Being and time (J. Macquarrie & E. Robinson, Trans.). Basil Blackwell. (Original work published 1927)
Hinton, P. (2017). Implicit stereotypes and the predictive brain: Cognition and culture in “biased” person perception. Palgrave Communications, 3, Article 17086. https://doi.org/10.1057/palcomms.2017.86
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
Ihde, D. (2009). Postphenomenology and technoscience: The Peking University lectures. State University of New York Press.
Olsson, F. (2024). Culture and implicit cognition: On the preconscious nature of prejudice and nationalism [PhD dissertation, Department of Sociology, Stockholm University]. https://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-233806
Pitardi, V., & Marriott, H. R. (2021). Alexa, she’s not human but… unveiling the drivers of consumers’ trust in voice-based artificial intelligence. Psychology & Marketing, 38(4), 626–642. https://doi.org/10.1002/mar.21457
Ruijten, P. A. M., Terken, J. M. B., & Chandramouli, S. N. (2018). Enhancing trust in autonomous vehicles through intelligent user interfaces that mimic human behaviour. Multimodal Technologies and Interaction, 2(4), Article 62. https://doi.org/10.3390/mti2040062
Walker, T. (2020). “Alexa, are you a feminist?”: Virtual assistants doing gender and what that means for the world. The IJournal: Student Journal of the Faculty of Information, 6(1), 1–16. https://doi.org/10.33137/ijournal.v6i1.35264
Wellner, G., & Rothman, T. (2020). Feminist AI: Can we expect our AI systems to become feminist? Philosophy & Technology, 33(2), 191–205. https://doi.org/10.1007/s13347-019-00352-z
Wellner, G. P. (2020). When AI is gender-biased: The effects of biased AI on the everyday experiences of women. Humana.Mente Journal of Philosophical Studies, 13(37), 127–150.