The AI Conundrum: Will AI penalize grassroots nonprofits and continue to perpetuate wealth inequality?

Wednesday, May 1st, 2024

Originally published in the ASU Lodestar Center Blog


In the ever-evolving landscape of charitable giving, the emergence of AI-powered tools such as CharityAI by DonationX raises important questions about fairness, transparency, and impact. As we plunge into the realm of automated charity rankings, we're compelled to ask: How are nonprofits ranked? Do these systems level the playing field, or do they deepen the chasm of inequality, stacking the odds against those who need support the most?

My journey into this conundrum began unexpectedly as I delved into a grant application for the small nonprofit I'm involved with, which was seeking funding through a foundation associated with a Southern California sports team. However, even before submitting through the foundation's portal, our nonprofit faced a setback: a lackluster rating on CharityAI, a technology that is still awaiting patent approval. This experience prompted a deeper exploration into the mechanics of such rating systems and their implications for newer and smaller nonprofits.

AI tools like CharityAI and other ranking systems have the potential to exacerbate inequality within the nonprofit sector for several reasons. Firstly, these tools often rely on quantitative metrics such as financial health, social media engagement, and digital presence to assess nonprofits. While this approach may seem objective, it fails to capture the nuanced and context-dependent nature of nonprofit work, disproportionately disadvantaging smaller and newer organizations that may not have the resources to compete on these metrics. Additionally, by prioritizing easily quantifiable factors, AI tools risk oversimplifying the real-world impact and effectiveness of grassroots nonprofits, further marginalizing them in favor of larger, more established organizations.

Moreover, the use of AI tools in nonprofit evaluation raises concerns about accountability and transparency. As these systems assign ratings and rankings based on algorithmic calculations, there is limited visibility into the criteria and processes used, potentially leading to biased or inaccurate assessments. This lack of transparency undermines the sector’s principles of fairness and inclusivity.

The broader backdrop of wealth inequality in the US amplifies the repercussions of AI-driven ranking systems. Concentrated resources among a privileged few afford larger organizations the ability to manipulate or exploit these systems to their benefit, thus widening the disparity between them and smaller nonprofits. Furthermore, the diversion of funds from taxes into private trusts and foundations curtails the reservoir of resources accessible for public good, exacerbating the difficulties confronting the broader populace and working classes.

In essence, using AI ranking tools as a shortcut in nonprofit evaluation poses a significant risk of perpetuating and deepening inequality within the charitable sector. Without careful consideration of their implications and proactive measures to promote fairness and inclusivity, these tools may undermine the very mission of philanthropy by reinforcing existing power dynamics and marginalizing those most in need of support.

Despite the noble efforts of approximately 100,000 new nonprofit entrants into the US philanthropic arena each year, their ascent is frequently stymied. In fact, about 30% of nonprofits fail to reach the ten-year mark, and almost 40% report less than $100,000 in annual revenue. Sadder still, nonprofits employ about 10% of the US workforce, yet funds diverted by the wealthy away from taxes and into pre-approved trusts, DAFs, and "familiar" foundations hurt the general public and our ability to fund social services like schools and infrastructure improvements.

As we navigate this new, AI-infused landscape, it is imperative to uphold principles of fairness, inclusivity, and accountability. Rather than reducing nonprofits to a predetermined ranking or predicting their success without full perspective, we must foster an ecosystem where organizations of all sizes and missions are supported and empowered to thrive. Do we desire a world in which we outsource our brains to the extent that critical thought is thrown out the window and innovative, new, and small nonprofits are thrown out the door, making it even harder for them to compete effectively for the limited dollars available?

To address these challenges, stakeholders, including AI developers, philanthropists, policymakers, and nonprofit leaders, must collaborate to ensure that AI tools are designed and implemented in ways that promote fairness, inclusivity, and social impact. This may involve incorporating qualitative assessments alongside quantitative metrics, providing support and resources to help smaller nonprofits improve their digital presence and data collection capabilities, and cultivating greater transparency and accountability in the use of AI in philanthropy. By doing so, we may be able to mitigate the risks of exacerbating inequality and marginalization in a sector tasked with promoting positive societal change, before it's too late.
