
Disappearing Cows: Artificial Intelligence and Socio-environmental Harm

  • Writer: Ashley Smyth
  • Dec 5, 2025
  • 6 min read

I haven’t seen a cow in years. You see cows everywhere in the Texas suburbs and plains (where I’m from), and you see a lot of steakhouses and barbecue spots in general, too. But in cities like Vancouver you see just the meat, the product detached from its source, only sometimes labelled “from BC”. Seeing cuts of beef in Save-On without acknowledging the farms, cows, and farmers is like seeing AI without the data centers, pollution, and labor. AI, though it has the ring of something abstract, has physical realities and tangible harms, especially to equity and the environment. It only makes sense to ask what’s caused this failure of object permanence, what goes on beyond our periphery, and how we can mitigate those planetary harms. In this non-comprehensive guide, I’ll compile resources that have helped me better understand AI through a socio-environmental lens, in hopes that they can support your own skepticism, too.

What is AI?

AI is an umbrella term, and there are so many types.

The broadest labels:

Narrow AI: Performs a specific task.

Artificial General Intelligence (AGI): Hypothetical, the type in sci-fi films that learns and performs like a human actor. This is also where ethical discussions begin.

Superintelligent AI: Even more hypothetical; smarter than humans and more capable at reasoning and innovating. At this point, humanity’s role in the world is called into question.

(Some) function-based labels:

Generative: Produces something upon request. Large Language Models (LLMs, like ChatGPT) do this with text, mapping outputs based on patterns, not thought or agency (see the toy sketch after this list).

Reactive: Cannot remember anything; it only reacts to the current input, like traffic management in smart cities.

Computer vision: Identifies patterns so technology can “see,” the type that helps with medical diagnoses.

Limited memory: “Learns” from past data to improve its outputs.

Theory of Mind: Interprets context and emotional cues, the type that would work in customer support, in pursuit of empathy.
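
To make the “patterns, not thought” point about LLMs concrete, here is a toy sketch of my own (not from any source cited in this post): a tiny bigram model that “generates” text purely by sampling which word tended to follow which in its training text. Real LLMs use neural networks over billions of parameters and tokens, but the underlying idea of mapping statistical patterns is the same.

import random
from collections import defaultdict

# Toy "language model": count which word follows which (bigrams),
# then generate text by sampling those observed patterns.
corpus = "the cow eats grass and the farmer feeds the cow hay".split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:              # no observed continuation
            break
        word = random.choice(follows[word])  # sample a pattern, not a thought
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the farmer feeds the cow hay"

Note that the output can be fluent nonsense: the model has no idea what a cow is, only which words tend to co-occur.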


Will AI take over the world?

It’s a debated topic, an “exciting” one. To overpower humans, an AGI (at least) would need to be created, a milestone also known as the singularity. Some say this will occur by the end of the decade, some say in 30 years, and some say never. It’s important to remember who is giving the estimate, and that nobody really knows yet. Entrepreneurs, for instance, benefit from bold estimates that AGI will be achieved in the next few years. On the other hand, academics who research ethics might not want to catalyze progress with a soon-to-be-here date, preferring to first ensure our social and political institutions can handle the development. AI has been in development for a long time, and it has a ways to go from here.

Bottom line: don’t let approaching estimates scare you, and don’t let trends pressure you into jumping on the bandwagon. A co-founder of OpenAI, for example, has shared predictions of when human-like AI actors would emerge, and that date (2021) has already passed. In my opinion, these flashy ideas can distract from the problems we’re experiencing now, such as AI perpetuating inequity and supporting an unethical, unsustainable supply chain. I will expand on this in the next section.


Environmental harm: AI as an extractive industry


In this section, I repackage Kate Crawford’s words from Chapter 3 of her book Atlas of AI, which I think capture well the intensive process that goes into AI products, and embed additional reading. It’s easy to get caught up in what we see in front of us, our place in society, without naturally seeing the people and places that bear the cost of profit and convenience; we have to look to see it. Beyond abstractions, producing the infrastructure needed for AI involves unsustainable extraction from people’s well-being and from mineral resources.

We can examine this idea by looking at the components of the supply chain that contribute to ruin outside our cities. Crawford starts in Silver Peak, Nevada, where mining lithium, one of the many minerals that go into AI computing systems, is a multi-billion-dollar industry. This figure reflects, in part, the millions of years it takes for lithium to form. Sometimes, mineral-rich countries sell their resources to fund conflicts; the 2010 Dodd-Frank Act calls these “conflict minerals,” and while the act aims to reform the process to avoid funding conflict, source-tracking can be difficult, at least according to manufacturers that benefit from plausible deniability.

Mining minerals can also involve unethical labor practices. For example, the Indonesian islands of Bangka and Belitung supply companies like Samsung, but also host unregulated gray-market miners who work without environmental or labor protections. Mines like these span the globe, forming a “planetary mine” that operates unconcerned with planetary boundaries. Processing rare earth minerals produces acidic water, “the waste we want to forget,” which is why the artificial lake near Baotou, Inner Mongolia, contains over 180 million tons of waste powder and mine runoff. Countries willing to take on environmental damage like this do well in the market, as is the case with China.

Once the materials are prepared, distributed, and shipped, each step adds its own carbon footprint before anything is implemented in a product. Take data centers as one example: their processors incur significant energy and water costs in operation and can strain local neighborhood supplies. High water and electricity consumption are indeed significant harms of unregulated AI growth, and the most popular discussion points, but I hope placing them here, within the larger supply chain, will expand the discussion.

After the products have run their course, they’re often shipped to e-waste disposal sites for dumping, with Ghana and Pakistan among the common destinations. Products are replaced, and the cycle of this “megamachine” of many individual actors continues, fueled by large-scale consumption. The term “compute maximalism” sums up the preference for using as much of the available computational resources in AI training as possible, accepting the economic costs while ignoring the environmental ones. That is concerning, as the Information and Communications Technology industry (chiefly data centers and communication networks) is set to account for 14% of global emissions by 2040.


But what about innovation?


Blanket statements are rarely helpful; despite the presently problem-ridden value chain, mindful AI development has great potential to benefit humanity and even accelerate climate progress. Here are some of the areas I found most notable where AI could help:


AI is not evil, but sometimes it is definitely unnecessary, and I’m not just talking about using ChatGPT as a search engine (more than 100,000 US homes could be powered with the energy the company’s queries consume annually; a rough sanity check on that figure follows this paragraph). Discretion is vital so that we limit harm in development as much as possible while still furthering society. I think Markelius et al. do a good job of dismantling “AI hype,” the unnecessary use of the technology in business. For example, 40% of Gen Z exaggerate their AI knowledge to seem informed because they think employers will like it, which isn’t unreasonable given private-sector FOMO and the integration of AI into practice; after all, competitors are integrating it. But the two pillars of this technodeterminism are (1) treating AI like a hammer that can fix everything and (2) making AI skills artificially sought after, creating a lock-in effect where it’s hard to become less dependent on AI. In other words, people fear they need to be AI experts to keep up in their fields, overestimating its necessity.

And it’s not just people. Countries like Germany, France, the US, and China have AI embedded in their official strategies, with purposes ranging from efficiency and quality of life to patriotism and maintaining social order. If unchecked, this could lead to harmful data usage and surveillance, increase government dependence on AI, and reproduce violent power structures for marginalized communities overlooked in AI development.
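
For the homes figure, here is a rough back-of-envelope sketch. Every input is an assumption of mine rather than a number from this post: roughly 3 Wh per query (an older, debated estimate; figures as low as ~0.3 Wh have also been published), about a billion queries a day, and an average US household using about 10,700 kWh per year.

# Back-of-envelope: could ChatGPT-scale query energy power >100k US homes?
# All inputs are illustrative assumptions, not sourced from this post.
WH_PER_QUERY = 3.0            # assumed energy per query (older, debated estimate)
QUERIES_PER_DAY = 1e9         # assumed daily query volume
HOME_KWH_PER_YEAR = 10_700    # approx. average US household electricity use

annual_kwh = WH_PER_QUERY * QUERIES_PER_DAY * 365 / 1000   # Wh -> kWh
homes_powered = annual_kwh / HOME_KWH_PER_YEAR

print(f"{annual_kwh:,.0f} kWh/year, enough for about {homes_powered:,.0f} US homes")
# -> 1,095,000,000 kWh/year, enough for about 102,336 US homes

Swap in a lower per-query estimate and the count drops by an order of magnitude, which says as much about how contested these numbers are as it does about the energy itself.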


In conclusion, it’s a trade-off.


And, in the status quo, one I believe isn’t being appropriately weighed. I’m not suggesting we overturn the world’s entrenched economic pedestal for collective well-being tomorrow (though I would vote in favor of this). However, I think addressing how we can reform supply chains in a way that gains political traction is a difficult but necessary priority, given its wide applicability and actionability. Rather than accepting popularized narratives, we would do better to question the relationship between extraction and AI—the cow and the steak.


November 4, 2025

A special thank you to my friends for keeping me grounded in a world of AI hype, to my POLI 344 student-led seminar group last year for introducing me to Crawford and Markelius, and to anybody I’ve talked to about this for prompting discussions that led me to investigate more.


 
 
 
