In our AI-driven world, words take on new meanings.
AI, our new team member, sees the world in a unique way.
Understanding its perspective will contribute to a better human-non-human partnership.
Let’s explore what affinity bias means in our new working environment from both sides: the conventional, human perspective and AI’s take on it.
Affinity Bias: The Conventional Definition
By conventional definition “Affinity Bias” refers to the emotional side of bias.
It’s the tendency of individuals to favour or feel more positively towards people who are similar to themselves in certain aspects, such as background, interests, or experiences.
This bias can influence decisions related to hiring, promotion, collaboration, or social interactions, as individuals may feel a sense of kinship or connection with those who share similar characteristics.
In the workplace, affinity bias can skew decision-making, leading decision-makers to prefer individuals who share their backgrounds or interests.
This tendency is rooted in human emotions and is driven by our desire as humans to feel what we consider to be positive emotions.
Emotions are processes that show up in our behaviours as repeatable patterns, patterns that AI can spot, learn, and reproduce.
What is AI’s perspective on Affinity Bias?
From an AI’s perspective, affinity bias refers to a tendency the system may develop towards certain groups or patterns based on the data it’s trained on.
This bias can occur if the training data unintentionally emphasizes, or lacks representation of, specific attributes, leading the AI to favour certain groups over others and amplify our own behaviour.
Put simply, AI picks up preferences or biases from the information it learns from the team.
This bias could affect how the AI makes decisions or suggestions, potentially impacting fairness and inclusivity in its outcomes.
Ways we can pass human affinity bias on to AI:
Training Data Influence:
Humans curate and provide the training data that AI learns from.
If humans exhibit affinity bias in their decisions, actions, or historical data, the AI system inadvertently learns and perpetuates these biases.
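This transfer can be made concrete with a minimal sketch. The data below is entirely hypothetical: historical hiring records where recruiters favoured candidates with a background similar to their own, and a toy majority-vote "model" standing in for a real classifier.

```python
from collections import defaultdict

# Hypothetical historical records: (candidate background, hired?).
# Recruiters favoured candidates similar to themselves, so "similar"
# candidates were hired far more often.
history = (
    [("similar", True)] * 80 + [("similar", False)] * 20
    + [("different", True)] * 30 + [("different", False)] * 70
)

# A toy "model": for each group, learn the majority outcome seen in training.
outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)

def predict(group):
    votes = outcomes[group]
    return sum(votes) > len(votes) / 2  # majority vote per group

print(predict("similar"))    # True — the historical preference is replicated
print(predict("different"))  # False — the historical disadvantage persists
```

Nothing in the model is malicious; it simply reproduces the pattern it was given, which is exactly how biased decisions become biased training examples.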
Implicit Biases in Decision-Making:
Human decision-makers, whether consciously or unconsciously, may carry biases influenced by affinity preferences.
If these decisions are used as training examples, the AI system will replicate these biases when making similar decisions.
User Interactions and Feedback:
Human users interacting with AI systems can unknowingly introduce biases.
For example, if users consistently favour certain types of content or profiles, the AI, driven by user feedback, might reinforce these patterns, further perpetuating affinity biases.
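The feedback loop described above can be sketched in a few lines. The item names and scores here are hypothetical; the point is how a small initial preference, fed back as clicks, compounds over time.

```python
# Hypothetical recommender scores for two profiles, initially equal.
scores = {"profile_A": 1.0, "profile_B": 1.0}

def recommend():
    # Recommend the highest-scoring profile.
    return max(scores, key=scores.get)

# Simulate users with a slight initial affinity for profile_A.
for step in range(10):
    shown = recommend()
    clicked = shown == "profile_A"  # users only click what matches their affinity
    if clicked:
        scores[shown] += 0.5  # positive feedback reinforces the pattern

# profile_A's score has grown with every round, while profile_B,
# never recommended after the first tie-break, stays where it started.
print(scores)
```

Because the system only learns from what it shows, the under-favoured profile never gets a chance to accumulate feedback, which is how affinity bias becomes self-reinforcing.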
Representation in Training Data:
The composition of the training data, influenced by human choices, may favour certain groups over others.
If specific demographics are underrepresented or misrepresented, the AI might develop biased associations and preferences.
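A small sketch, again with hypothetical data, shows why underrepresentation alone is enough to produce unequal outcomes: a naive model that learns the overall majority label can be perfectly accurate for the dominant group and wrong every time for the minority group.

```python
# Hypothetical training data: (group, label). The minority group makes up
# only 5% of the data, and its typical label differs from the majority's.
data = [("group_majority", "fit")] * 95 + [("group_minority", "not_fit")] * 5

# A naive model that ignores group and always predicts the overall majority label.
labels = [label for _, label in data]
majority_label = max(set(labels), key=labels.count)

def accuracy(group):
    rows = [(g, l) for g, l in data if g == group]
    return sum(l == majority_label for _, l in rows) / len(rows)

print(accuracy("group_majority"))  # 1.0 — perfect for the dominant group
print(accuracy("group_minority"))  # 0.0 — the minority group is always misjudged
```

Overall accuracy looks excellent (95%), which is why this kind of bias is easy to miss unless performance is measured per group.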
Language and Terminology:
Humans contribute to the language and terminology used in training data.
If certain terms or phrases carry implicit biases, the AI, lacking emotional understanding, may still absorb and perpetuate those biases in its outputs.
Organizational Culture Impact:
The prevailing culture within an organization, driven by human decisions and behaviours, can shape the training data and preferences of AI systems.
If organizational culture exhibits bias, it may inadvertently transfer to AI processes.
Even though AI lacks emotions, it learns from human-generated data. Human decisions, interactions, and data choices shape AI systems and pass on potential biases.
My Thoughts:
When we discuss affinity bias, it’s important to note that biases are a natural part of being human, whether we realize it or not.
Throughout history, the variety of ideas and behaviours among people has helped to balance out biases and handle them effectively.
However, with the introduction of AI, we now have a new team member—one who isn’t aware of human biases. AI learns solely from the data it’s given, without understanding its decisions’ broader context or implications.
As a result, it’s our responsibility as humans to acknowledge and recognize our biases and ensure they don’t influence AI’s decision-making process.
By understanding bias, we can create rules and guidelines to help AI and humans make fairer choices.
Ultimately, it’s up to humans to lead the way in identifying and correcting biased behaviour, especially when working alongside AI.
We must actively address biases to promote fairness and inclusivity in decision-making.