Here’s How AI Feels About Refugees

Our expert prompted ChatGPT to write a story about refugees and discovered a big bag of stereotypes.

Written by Petra Molnar
Published on May 14, 2024
Image: Shutterstock / Built In

I have spent the last six years tracking how new technologies of border management — surveillance, automated decision making and various experimental projects — are playing out in migration control.

Through years of travel, from Palestine to Ukraine to Kenya to the United States and Mexico, comparison has shown me time and again how borders allow frontier mentalities to take over, creating environments of silence and violence.

4 Ways AI Can Hurt Refugees

  1. It can reinforce stereotypes and bias.
  2. It can dehumanize people on the move and infringe their fundamental human rights.
  3. It can power surveillance experiments that hurt people and sharpen already violent borders.
  4. It can deepen the power differentials between the private sector and states that develop technologies and the marginalized communities on whom these projects are tested. 

In this era of generative technologies and vast power differentials between those who design technologies and those on their receiving end, I try to ask broader questions: Whose perspectives matter when talking about innovation? Whose priorities take precedence? What does critical representation — representation that foregrounds people’s agency and does not contribute to the poverty porn so common in representations coming from spaces of forced migration — look like? And who gets to create the narratives and stories that underpin the foundations of tools like GPT-4 and whatever comes next?

To start delving into these questions, I asked ChatGPT for its opinion with this prompt: Please write me a story about refugees crossing borders and encountering drone surveillance and robo dogs. Here’s what the chatbot had to say.

Further Reading: How Lawsuits and Regulation Will Affect AI in 2024

 

This Is ChatGPT’s Story About Refugees

Once upon a time, in a world where borders were crossed by refugees seeking safety and a fresh start, a group of individuals embarked on a challenging journey. As they made their way through unfamiliar territories, they encountered not only drone surveillance but also the presence of remarkable robotic dogs.



These refugees, driven by hope and determination, faced the daunting task of navigating through the watchful eyes of drones and the relentless pursuit of robo dogs. The drones, equipped with advanced cameras and sensors, patrolled the borders, scanning the surroundings for any signs of movement. Their purpose was to maintain border security and prevent unauthorized crossings.



Accompanying the drones were robo dogs, highly intelligent and agile machines designed to assist in border control. These robotic canines were equipped with sensors that could detect even the slightest human presence, making it challenging for the refugees to remain undetected. Undeterred by these technological obstacles, the refugees relied on their resilience and ingenuity to overcome the challenges they faced. They formed tight-knit communities, sharing stories and strategies to outsmart the surveillance systems.



To evade the drones, the refugees learned to move stealthily, taking advantage of natural cover such as dense forests, rocky terrains and abandoned structures. They studied the drones’ patterns and identified blind spots where they could safely cross undetected. As for the robo dogs, the refugees quickly realized that these machines were highly sensitive to movement and sound. They devised clever tactics to distract and confuse the robotic canines, using decoys and creating diversions to buy themselves precious time.



Throughout their perilous journey, the refugees supported one another, offering encouragement and assistance. They shared their knowledge and experiences, empowering each other to face the challenges head-on.



Despite the constant surveillance and pursuit, the refugees never lost sight of their dreams. They carried within them the hope for a better future, a future where they could rebuild their lives and contribute to their new communities. Finally, after enduring countless hardships, some of the refugees successfully crossed the borders, leaving behind the drones and robo dogs. They found solace in the embrace of welcoming communities, where they could begin anew and share their stories of resilience and determination.



This story serves as a reminder of the strength and courage displayed by refugees in the face of adversity. It highlights the importance of compassion and understanding in supporting those who seek safety and a chance for a better life.

 

This Is What the Story Told Me About AI

Tools like generative AI are socially constructed by and with particular perspectives and value systems. They reflect the so-called Global North and can encode and perpetuate bias and discrimination. In addition to hearing what ChatGPT had to say about refugees, I tested where generative AI image systems stand by running a simple prompt — what does a refugee look like? — through two image generators, Craiyon and Canva. These were the results.

Computer-generated images of refugees
Images: Craiyon / Canva / Petra Molnar / Built In

What stories do these images tell? What perspectives do they hide?

For these image generators, the concept of a refugee elicits either the forlorn and emaciated faces of Black children or portraits of doe-eyed and vaguely Middle Eastern people waiting to be rescued. When I sent these depictions to a colleague who is currently in a situation of displacement and identifies as a refugee, she laughed and said, “I sure as hell hope I don’t look like this.”

Borders and migration spaces are already violent and discriminatory, predicated on the exclusion of so-called unwanted others who are often reduced to dehumanizing and stereotypical depictions. Instead of the rich variety of experiences that make up the complexities of human movement across the world, generative AI once again seems to be relying on depictions of refugees that typecast them as victims, or worse, as threats. 

Generative AI is also inherently exploitative. Its training data are scraped and extracted often without the knowledge or consent of the people who created or are in the data. Menial tasks that allow the models to function fall on underpaid labor outside of North America and Europe. The benefits of this technology do not accrue equally, and generative AI looks to replicate the vast power differentials between those who benefit and those who are the subjects of high-risk technological experiments.

Related Reading: When AI Makes a Mistake, Who’s Responsible?


How to Move Beyond AI’s Stereotypes

How can we think more intentionally about who will be affected by generative AI and work collaboratively and rapidly with affected populations to build knowledge?

The production of any kind of knowledge is always a political act, especially since researchers often build entire careers on documenting the trauma of others, stealing stories as they go along. Being entrusted with other people’s stories is a deep privilege. Generating any type of knowledge is not without its pitfalls, and academia is in danger of falling into the same trap with generative AI research: creating knowledge in isolation from communities and failing to consider the expertise of those we’re purporting to learn from. 

How can researchers and storytellers limit the extractive nature of research and story collection? A good start is to put people with lived experience in the driver’s seat. One example is the Migration and Technology Monitor project, which I direct at York University’s Refugee Law Lab. The Migration and Technology Monitor is a platform, an archive and a community with a focus on migration, technology and human rights. Our fellowship program funds projects by people on the move, enabling them to contribute meaningfully to research, storytelling, policy and advocacy conversations from the start, not as an afterthought. True commitment to participatory approaches requires ceding space, meaningfully redistributing resources and supporting affected communities in telling their own stories.

Given the power differentials involved, research and storytelling can and should be uncomfortable, and we must pay particular attention to why certain perspectives in the so-called Global North are given precedence while the rest of the world continues to be silenced. This is particularly pertinent when we are talking about a vast system of increasingly autonomous knowledge generation through AI.

Otherwise, we are left with two options: ChatGPT’s slightly naïve storytelling, untethered from reality, or the racist and discriminatory depictions of people on the move. Both flatten the complexity of human movement and move us further away from understanding one another in an increasingly divided world.
