Look under the AI hood before purchase

Generative AI has immense potential to change our world for the better. But that isn’t an endorsement; it’s a disclaimer, given what I’m about to write.

In 2019, I helped plan and execute an event focused on the energy transition. The idea was to help high-level attendees - including five heads of state and several ministers - actually move things forward through workshops and discussions.

In one particular workshop, an attendee offered, uninterestedly, “let’s just make an app.” He also threw out the concept of “blockchain.” There was no thoughtful follow-up on what that would entail and no actual connection to the problem. It also distracted from the solutions others were offering in the room. He wasn’t the first to say it, but hearing this surface-level jargon was exasperating. It was the kind of thing, in 2019 at least, I would hear people say to - presumably - seem future-thinking and edgy without offering substance. It always made me feel like we were moving further from the goal.

How does this connect to our current moment? At times, the conversations around AI’s potential are glib. They could be considered the 2025 version of “let’s just make an app” or “blockchain is the answer.” From where I sit, a lot of AI mentions in the past year were used as a way for people to sound smart or edgy without acknowledging the realities of its use - or reflecting on whether it actually needs to be used. I have prodded a high-level executive to be more detailed on their “visionary” suggestion to work AI into our practices, and they struggled to answer. It’s all very, “go girl, give us nothing.”

Anecdotes and ranting aside, what am I suggesting we reflect on when we talk about AI? On the positive side, we now have good ideas and case studies available (for example, Project Evident’s work or IMACS), so there is plenty out there to help anyone make use recommendations when speaking about AI. But I am more interested in the blind spots. Call it my natural New Yorker suspicion of anything that seems too good to be true. Two questions came to mind: What are the negative effects of AI? Who is affected by them?

What are the negative effects of AI?

Here are a few key risks we take with generative AI use:

  • AI companies outbid local communities for resources: AI companies are outbidding cities for renewable energy - taking from local communities to supply data centers.

    “...some AI companies outbid cities and essential services for access to renewable energy, limiting its availability for the general population. In many regions, governments struggle to supply enough clean energy to meet the rising digital demands, leading to an increased reliance on fossil fuels. Instead of making energy more efficient, AI’s growing energy requirements risk slowing the transition to sustainable power sources. (The Dark Side of AI: How Artificial Intelligence is Harming the Environment)”

    Unfortunately, the same is true of the competition for clean water in cities where it is already scarce.

  • Climate and environmental effects across the full AI development, training and deployment cycle: Electricity and water needs for AI are high, not just in deployment and consumer use of the technology, but also in the training process (which is ongoing, given the iterations continuously being developed to advance the technology). Development also involves toxic chemicals and resource mining, and the transportation of materials has its own climate impacts. The demands on our resources will only increase - it’s not a one-and-done situation.

  • Damaging societal effects: Gen-AI applications can be actively counterproductive to society, such as by facilitating the spread of misinformation or delaying the retirement of fossil fuel–based power plants. AI will bring changes that “[risk] widening the digital divide between urban and rural areas. To harness its potential for all, policymakers must prioritise digital infrastructure, boost digital literacy, and support SMEs to ensure AI's benefits reach everyone and help tackle local skills bottlenecks. (Generative AI set to exacerbate regional divide in OECD countries, says first regional analysis on its impact on local job markets).”

Who is affected by them?

If we focus on climate, then the answer is: “Often, these fossil-fuel-dependent regions are situated in proximity to poor and underserved communities. These circumstances could perpetuate historical environmental inequities related to extreme heat, pollution, air quality, and access to potable water.” (The US must balance climate justice challenges in the era of artificial intelligence) However, this also extends to economically disenfranchised communities when it comes to jobs and access to the right skills for a future with AI. Admittedly, the sources I reviewed note that further research is needed to fully understand the societal implications.

The point I’m making is that AI use is not without consequence. So, if you hear someone casually advocating for its use, they should come with a good reason why, an explanation of how it can be positively integrated, and a plan for how risks can be mitigated.

What can you do?

You can take the lead in prompting and facilitating these conversations! Here are 5 questions you can ask:

  • Do we really need to use AI for small tasks? More often than not, the answer is no. Even when doing research, you’re allowing an AI algorithm to decide which sources are credible and worth including. (I’ve done it, too, but change is possible!)

  • Can we better advocate for responsible AI use and regulation? We should be pushing governments, AI companies and private funders and investors to exercise responsible use of AI and - as best as possible - prepare our communities for a future in which AI does further integrate into our daily lives and systems.

  • Is the AI application we are using transparent about its negative impacts on climate and communities? If you can’t assess the risk of using a technology, then you can’t make an informed decision.

  • Can we develop or adapt a smart benefit-cost evaluation framework? Assessments done to determine whether to use AI have often been incomplete. We should have “benefit–cost evaluation frameworks that encourage (or require) Gen-AI to develop in ways that support social and environmental sustainability goals alongside economic opportunity. (The Climate and Sustainability Implications of Generative AI)” Here’s an example framework from Project Evident.

  • How will the community we are trying to help be affected by our use of AI? Sometimes a proposed solution has short-term gains but results in long-term harm for marginalized and impoverished communities and Global South countries. It’s good to be aware of that trade-off.

Current fixations for your weekend reading

I went down several rabbit holes over the past two weeks, but here are a couple of relevant themes for you. Scroll for the article links and briefs!

  • AI’s potential to build and destroy

  • Spending down to non-existence

Just for fun 

@sparkishkid somehow finds a way to turn American rap songs into commentary on global systems and conspiracies. It’s hilarious every single time. Start with her video about OT Genesis’ CoCo and work your way through.

Screenshot of Instagram reel.

Until next time!

Safiya

Current fixations for your weekend reading

  • AI’s potential to build and destroy

The Climate and Sustainability Implications of Generative AI: As of today, there is a lack of regulation and of actual limits on generative AI’s resource use. This piece did a great job of outlining the problems and identifying the specific stakeholders and the actions they can take to rectify the negative implications of AI. One important point I want to pull out here is:

“...because much of the existing literature on AI ethics and policies comes from wealthy countries, the field is vulnerable to disproportionately catering to the needs and economies of wealthy countries as opposed to the priorities and challenges unique to the Global South. As governmental organizations invest in AI and develop AI regulations, civil society can leverage existing organizational structures to convey reactions and recommendations.”

Visual summary of the impacts of AI to consider

If you’re interested in reading more on AI and its climate and environmental impacts, here are a few other articles I found informative: 

The US must balance climate justice challenges in the era of artificial intelligence: Ensuring that the risks of AI are mitigated and the societal benefits are evenly distributed requires deliberate action from policymakers, AI companies, consumers and civil society. The arguments presented were well articulated and actionable.

“By centering justice, we can mold a more transformative and sustainable climate policy designed for an emerging digital future. Lawmakers must not only prioritize reducing the energy inputs and emissions associated with AI, but also improve the well-being of the most-impacted populations.”


The AI Wave is Here – And Too Many Funders Are Standing on the Beach: While skeptical of the AI enthusiasm and the “act fast, then fix it later” stance, the author did present some cool points on how we can take advantage of AI’s potential while enforcing responsible AI development and deployment. In short, they were: develop AI-specific investment rubrics; establish collective ethical frameworks addressing fairness, transparency, and safety; provide program officers with training in AI literacy; engage external technical and ethical advisors; and partner with technology experts, academic institutions, and peer funders to share knowledge.

Going ‘AI first’ appears to be backfiring on Klarna and Duolingo: If consumer reactions to Klarna and Duolingo’s integration of AI into their business models are any indication, the world will not simply accept AI. Both companies opted to replace human talent with generative AI, faced backlash and, in Klarna’s case, ended up rehiring people. AI models remain in early stages and need human oversight to avoid making big mistakes. One such case is the use of AI to assess claims at UnitedHealth Group - which, we have since learned, had a 90% error rate and resulted in increased health bills or untreated ailments. The lessons to be learned here are (1) AI still requires human oversight; and (2) don’t deploy a new public-facing AI tool without a proper public relations plan in place (see: Duolingo’s embarrassing, tone-deaf social media post).

Spending down to non-existence 

The $200 Billion Gamble: Bill Gates’s Plan to Wind Down His Foundation: You can never make me stan Bill Gates. Despite the positive impact the Gates Foundation has had, the best way forward will always be to tax the rich and not leave it up to the morals or discretion of wealthy private citizens to decide which causes are funded, how to approach them and ultimately how success will be evaluated. However, in the absence of good policy, we have to appreciate the wealthy who do choose to use their money for good rather than to fund weird “feminist” space tourism trips - and appreciate the decision to spend down the foundation’s money. Here are a couple of interesting points from the article to reflect on:

  • Trend in countries decreasing ODA: “And you’re right that a single actor who’s like, Hey, we don’t want to give — collectively, it reduces the will of other people to give. When each country decides, OK, I’m sort of just going to take care of myself, it pushes other countries to at least think about that. So I’m sad to see defense budgets being increased, because that’s money that’s not going to human welfare either domestically or to help the poorest in the world. It’s a tragic thing.”  

  • Debt relief for SSA: “Sub-Saharan Africa is very challenged financially and with instability. Just the debts alone! And we should be doing what we did at the turn of the century, which is debt relief for all of these countries, to give them a clean balance sheet. But there isn’t the will right now.”

  • Alarming incompetence of “efficiency” measures: “But that also means that when people cut these things, will they notice? They cut the money to Gaza Province in Mozambique. That is really for drugs, so mothers don’t give their babies H.I.V. But the people doing the cutting are so geographically illiterate, they think it’s Gaza and condoms. Will they go meet those babies who got H.I.V. because that money was cut? Probably not.”


The development landscape is changing - so should we.