Global leaders trying to stop AI (What that means for your nonprofit)

With so much uncertainty out there, I wanted to find a way to add value for you, nonprofit leaders, with no intent to sell you anything here. I figured I'd create these weekly emails I call the Monday Motivational Minute: Mondays because they're historically the roughest days, and a minute . . . well, because you're very busy people. In these, I'll share creative, quick, actionable takeaways for you to try (all in a one-minute read). No pressure, no need to respond; read at your leisure, or unsubscribe if it doesn't add value.

---------------------

Happy Belated Sunday! 

October 22nd, 2025, CHANGED THE AI CONVERSATION forever. Have you heard? 

Over 800 global leaders and celebrities, including Apple co-founder Steve Wozniak, former advisors to both the Trump AND Obama administrations, and Prince Harry and Meghan Markle, openly demanded an immediate halt to AI "superintelligence" development by tech giants like Google, OpenAI, and Meta. This isn't conspiracy theory, like speculating about what's under the bunkers at DIA . . . aliens, of course. 🤷🏾‍♂️ 

Meanwhile, your nonprofit is probably already using AI for fundraising, grant writing, and program management without formal policies protecting your organization, staff, or clients. Sound familiar? 

Here's what nonprofit leaders like you need to know: 64% of Americans want a safety-first approach to AI development (Pew Research), yet 92% of nonprofits have ZERO formal policies governing AI use or data privacy. But in your defense, with everything on your plate, is this really your highest priority? And even if it were, where would you start?
(Sorry, I have to defend you on that one) 

The coalition, organized by the Future of Life Institute (a nonprofit focused on existential risks), cited major concerns: automation destroying jobs, digital technologies outpacing human oversight, and serious threats to civil liberties. 

And this is happening now: your staff is pasting PII into public AI tools whose outputs may be logged, reviewed, or used for training, breaching your privacy obligations. Meanwhile, your board either has no idea what AI risks exist in your sector, or they're so paranoid they won't let you whisper the letters A.I. in their direction, leaving you to innovate without guardrails. 

I truly believe AI is what you've been waiting for: super high value at low cost. But responsible AI adoption isn't optional anymore. It's about protecting your mission, your clients, and your organization's credibility at a time when public trust matters more than ever. 

The Questions Your Board Should Be Asking: 

• Do we have documented policies for AI tool usage?
• How are we protecting client data when using AI platforms?
• What safeguards ensure AI enhances rather than replaces human-centered service?
• Is our board informed about factual, unbiased AI risks and opportunities? 

Actionable Task: Luckily, I've done 98% of the work for you. Get your FREE editable AI Guidelines: 

Email me directly and I'll send you the comprehensive AI Policy Guidelines I created specifically for you. It includes: 

• Staff AI usage policies
• Data privacy protections
• Client service safeguards
• Board oversight frameworks 

Just reply to this email with "Send AI Guidelines" and I'll get it to you. 

Remember: there's a responsible way to leverage AI, and it's up to US to define it. The tech giants won't do it for us. Ethics don't come programmed into the code; they come from the humans brave enough to draw boundaries before harm happens. 🙏🏾

Regis Arzu  (he/him/his)
[email protected]
CoVoice | CEO
347.748.5078

P.S. Know another ED holding it all together with grit and grace? Forward this their way. We’re not meant to lead alone, and together, we go farther.