The Ethical AI Framework: Building Technology That Puts People First

6 October 2025 | Juliette Onyegbunam | Technology

Every time I hear someone say "we need an AI strategy" without first explaining what problem they're trying to solve, I die a little inside. And lately, I've been dying a lot.

The gold rush is in full swing. Every tech company is slapping AI stickers on things that were perfectly fine last week. Every conference has someone on stage promising that artificial intelligence will revolutionise absolutely everything. And every charity I meet is quietly panicking that they're falling behind, that they should be doing something with AI, that everyone else has figured it out and they haven't.

Here's what I need you to hear. Most of it is nonsense. Not the technology itself, the technology is genuinely remarkable. But the way we're talking about it, the way we're being sold to, the way we're being made to feel inadequate for not having figured it out yet, that part is absolutely nonsense.

So let's cut through it. Let's talk about what AI actually means for purpose-driven organisations, how to use it without losing your soul, and when to politely walk away.

The Question Nobody's Asking

Here's what I notice about every responsible, thoughtful organisation I've ever worked with. They ask the same question before adopting anything new: what's the downside?

Not because they're negative or resistant to change. Because they've learned that good intentions aren't enough. Because they've seen too many well-meaning initiatives create unintended harm. Because the people they serve can't afford for them to get this wrong.

With AI, that question matters more than ever.

I watched a charity last year test an AI tool that was supposed to help them triage support requests. It analysed incoming messages and suggested whether someone needed urgent help, could wait, or should be referred elsewhere. On paper, it was brilliant. Faster responses. Less pressure on overworked staff. More people helped.

But here's what happened in practice. The tool had been trained on data that didn't include certain accents, certain ways of phrasing things, certain cultural references. It systematically under-prioritised requests from the very communities the charity existed to serve. Not because anyone built it to be biased. Because bias is what happens when you don't ask the hard questions first. They switched it off within a month. And thank God they were paying attention.

The Boring Stuff First

Before you even think about AI, do me a favour. Get your data in order. I cannot tell you how many organisations come to us wanting artificial intelligence when they don't have reliable basic information. Their donor database is a mess. Their service records are inconsistent. Their website analytics are a mystery because they never set them up properly.

AI doesn't fix that. It amplifies it. It takes your messy data and makes confidently wrong decisions at scale, faster than any human ever could.

The charities I know who are using AI well all did the same thing first. They spent a year, sometimes two, just getting their house in order. Cleaning up their data. Understanding what they had. Being honest about what they didn't. Building the boring foundations that make everything else possible.

It's not glamorous. It doesn't make for good funding applications. But it's the difference between technology that helps and technology that harms.

Where AI Actually Helps

Right, enough warnings. Let's talk about where this stuff genuinely works.

I'm going to give you three examples from organisations we've worked with. None of them are flashy. All of them made a real difference.

First, a charity supporting young people with mental health difficulties. They were spending hours every week moderating their online forum, reading every post, looking for signs that someone might be in crisis. We built them a tool that didn't replace that human oversight but flagged posts that might need attention first. The moderators still read everything. They just didn't have to hunt for the urgent stuff anymore. Response times dropped by about 60%, and nobody slipped through the cracks while the team was working through the queue.
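The pattern behind that tool is worth making concrete: nothing is filtered out, the model only decides what gets read first. Here's a toy sketch of that flag-first queue. The keyword scorer is a deliberately crude stand-in for the real trained model, and the phrases and posts are invented for illustration.

```python
# A toy "flag-first" review queue: every post is still read by a human
# moderator; the scorer only decides what gets looked at first.
# A real deployment would use a trained classifier, not this keyword
# list -- the list below is purely illustrative.

CRISIS_TERMS = {"can't cope", "hopeless", "hurting myself", "crisis"}

def urgency_score(post: str) -> int:
    """Count crisis-related phrases in a post (crude stand-in for a model)."""
    text = post.lower()
    return sum(term in text for term in CRISIS_TERMS)

def review_queue(posts: list[str]) -> list[str]:
    """Return every post, most urgent first. Nothing is filtered out."""
    return sorted(posts, key=urgency_score, reverse=True)

posts = [
    "Had a good week at school",
    "I feel hopeless and I can't cope anymore",
    "Anyone else find exams stressful?",
]
queue = review_queue(posts)
# queue[0] is the post flagged as most urgent; the rest still get read
```

The key design choice is that `review_queue` reorders rather than discards: the humans keep full oversight, the machine just saves them the hunt.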

Second, a community organisation running training programmes. They had years of feedback forms, evaluation data, follow-up surveys, all sitting in different places, never analysed properly. We helped them use some fairly basic machine learning to spot patterns. Which trainers got the best outcomes. Which formats worked for different groups. Where people struggled and needed extra support. Nothing magical. Just finding the signals in the noise they'd already collected.
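"Finding the signals in the noise" can start far simpler than it sounds. Here's a minimal sketch of the idea: group evaluation scores you already hold and compare averages. The records and field names are made up for illustration; the real analysis used richer data and basic machine-learning models rather than plain means.

```python
# A minimal version of pattern-spotting in existing feedback data:
# group scores by trainer and compare averages. All data here is
# invented for illustration.
from collections import defaultdict
from statistics import mean

feedback = [
    {"trainer": "A", "format": "workshop", "score": 4},
    {"trainer": "A", "format": "online",   "score": 3},
    {"trainer": "B", "format": "workshop", "score": 5},
    {"trainer": "B", "format": "online",   "score": 5},
]

# Collect every score under the trainer who earned it
by_trainer = defaultdict(list)
for record in feedback:
    by_trainer[record["trainer"]].append(record["score"])

# Average per trainer -- the first, crudest "which trainers get the
# best outcomes" signal
averages = {trainer: mean(scores) for trainer, scores in by_trainer.items()}
```

The same grouping trick works for formats, locations, or cohorts; the point is that the data was already collected, it just needed structuring before any pattern could show.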

Third, an environmental charity processing thousands of photos from citizen science projects. They needed to count things. Wildlife. Litter. Plant species. Things that humans are good at spotting but get bored of doing after about twenty minutes. We trained a model to do the initial pass, flagging images that definitely contained what they were looking for, so their volunteers could focus on the interesting ones. They processed three times as much data in half the time.
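The "initial pass" idea is a simple triage by model confidence: confident hits get tallied automatically, uncertain images go to volunteers. A sketch, with a fake scoring function standing in for the real trained vision model (it exists only so the example runs; the names and threshold are illustrative):

```python
# Sketch of a model doing the first pass over citizen-science photos:
# confident positives are auto-counted, uncertain ones go to humans.
# fake_model is a stand-in for a real trained image classifier.

def fake_model(image_name: str) -> float:
    """Pretend confidence that the target species appears in the image."""
    return 0.95 if "bird" in image_name else 0.40

def triage_images(images: list[str], threshold: float = 0.9):
    """Split images into machine-countable and needs-human-review."""
    auto_counted, needs_review = [], []
    for image in images:
        if fake_model(image) >= threshold:
            auto_counted.append(image)
        else:
            needs_review.append(image)
    return auto_counted, needs_review

auto, review = triage_images(["bird_001.jpg", "unclear_002.jpg", "bird_003.jpg"])
```

Setting the threshold high is the safety choice: when the model is anything less than very sure, the image still lands in front of a volunteer.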

Notice what these have in common. The AI wasn't the star. It was in the background, doing boring work, making humans more effective at the stuff humans are actually good at.

The Red Flags

So how do you know if you're heading for trouble? Here are the things I watch for. If someone promises AI will save you money before they understand your organisation, run. If they can't explain, in plain English, how the technology works and where it might go wrong, run faster. If they dismiss concerns about bias or privacy as people being "resistant to innovation," run and don't look back.

The best AI conversations I've had started with "here's what we're worried about," not "here's what we can do." The worst started with "everyone's doing it" and ended with expensive mistakes.

Also, pay attention to how you feel. If you're implementing AI because you're excited about what it could mean for the people you serve, that's a good sign. If you're implementing it because you're anxious about being left behind, that's a warning.

The Question That Matters

There's one question I come back to again and again with every organisation I advise. It's simple, but it cuts through almost all the noise. If this works perfectly, who benefits? And if it goes wrong, who pays?

The answers tell you everything. If the people benefiting are the same people who'd bear the cost of failure, you're probably in a good place. If they're different, if you're asking your community to absorb the risk while you enjoy the efficiency gains, you need to think again.

I'm not saying never take risks. Every meaningful innovation involves some uncertainty. But the risks should be shared, owned, acknowledged. Not hidden behind technical complexity or brushed aside with assurances that the technology is smarter than we realise.

Where We Come In

I'll be honest, we don't do a huge amount of AI work at ALWAYS 49. Not because we can't. Because most organisations aren't ready for it, and most problems don't need it.

But sometimes, rarely, the stars align. The data is clean. The problem is clear. The benefits are real and the risks are understood. And the difference AI can make is genuinely worth pursuing.

When that happens, we build things carefully. Slowly. With constant checks and human oversight and a willingness to stop if it's not working. No hype. No promises we can't keep. Just practical tools that help good people do good work.

If that sounds like where you are, let's talk. If it doesn't, honestly, that's fine too. The best technology decision is sometimes the one you don't make.

Thinking about AI for your organisation? [Talk to ALWAYS 49] about whether it's the right time. No hype. Just an honest conversation about what's possible and what's not.
