May 22, 2020 · Ethics

AI ethics, for the people actually building it

Dimple Paratey
Chief Marketing Officer

The word "ethics" has a PR problem in our industry. It's become associated with either long, abstract debates about superintelligence, or with earnest-but-vague corporate "AI principles" pages that no engineer has ever read.

I'd like to suggest a smaller, more useful frame: ethics as a set of habits.

Not rules. Not principles. Habits — the little things a careful practitioner does almost without thinking, the way a surgeon scrubs in. You don't need a philosophy degree to pick them up. You just need to want to do the work well.

Here's the short list I keep coming back to.

1. Ask who gets hurt if this fails

Every AI system has failure modes. The question isn't whether it will fail — it's who bears the cost when it does.

A recommendation system that gets it wrong costs the user a mild annoyance. A diagnostic tool that gets it wrong might cost someone their health. A parole-prediction system that gets it wrong might cost someone their freedom.

Before you build anything, write down who pays the cost of a mistake. If the answer is "someone who has no power in this situation," you are taking on an ethical responsibility, and you should pause and ask whether you're the right people to be doing this work.

2. Look at your data with the eyes of someone it excludes

Every dataset has a silent demographic — the people who aren't in it. If your facial recognition training data was sourced from public photos on Western social media, it will work worse for older people, darker-skinned people, and people who don't post photos of themselves online.

This isn't a hypothetical. It's happened, many times. The habit is simple: before training, imagine the person your dataset doesn't know about, and ask whether your system will fail them quietly.
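One way to make this habit concrete is a quick pre-training check of who is and isn't in the data. Here's a minimal sketch; the group labels, reference shares, and the 0.5 tolerance are all illustrative assumptions, not a standard.

```python
# Hedged sketch: compare a dataset's group distribution against a reference
# population to spot silent gaps before training. The group labels, reference
# shares, and tolerance below are illustrative, not from any real audit tool.
from collections import Counter

def coverage_gaps(samples, reference_shares, tolerance=0.5):
    """Flag groups whose share of the dataset is below `tolerance` times
    their share of the reference population (including groups missing entirely)."""
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        if share < tolerance * ref_share:
            gaps[group] = share
    return gaps

# Hypothetical age-band labels for 100 training records
samples = ["18-30"] * 85 + ["31-50"] * 13 + ["51+"] * 2
reference = {"18-30": 0.30, "31-50": 0.35, "51+": 0.35}
print(coverage_gaps(samples, reference))  # {'31-50': 0.13, '51+': 0.02}
```

The check won't tell you how the system will fail the missing people, only that it hasn't seen enough of them to know.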

3. Show people what you learned, not just what you decided

An AI that says "denied" without explanation is an AI that can't be challenged, audited, or improved. "Denied because your application scored 0.34 on feature X, which is below our threshold of 0.5" is infinitely better — even if the feature itself is debatable.

Explainability is not a solved problem, but "we tried and here's what we can tell you" is a very different posture from "trust the black box."
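The "decided, and here's why" posture can be as simple as returning the reason alongside the verdict. A minimal sketch, with the threshold, feature name, and field names all hypothetical:

```python
# Minimal sketch of "decision plus reasons" output. The threshold, feature
# name, and Decision fields are illustrative, not from any real system.
from dataclasses import dataclass

THRESHOLD = 0.5  # hypothetical decision threshold

@dataclass
class Decision:
    approved: bool
    score: float
    threshold: float
    reason: str

def decide(score: float, feature_name: str = "feature X") -> Decision:
    approved = score >= THRESHOLD
    reason = (
        f"{'approved' if approved else 'denied'} because {feature_name} "
        f"scored {score:.2f} against threshold {THRESHOLD}"
    )
    return Decision(approved, score, THRESHOLD, reason)

d = decide(0.34)
print(d.reason)  # denied because feature X scored 0.34 against threshold 0.5
```

The point isn't the data structure; it's that the reason is computed at decision time and travels with the answer, so it can be logged, audited, and challenged.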

4. Treat private data like you'd treat your own

This sounds obvious. It isn't, in practice. Ask yourself: would I be comfortable if my mother's medical data were handled the way we're handling our users' data here? Her loan application? Her emails?

The answer is not always "no" — sometimes careful, anonymised, limited-purpose use is completely fine. But running the check keeps you honest.

5. Plan for the exit

When the project ends, what happens to the data, the model, the inferences made about individuals? Too many AI systems have no end-of-life plan, and the data outlives everyone's memory of why it was collected.

Before you collect a single record, write down how it will be deleted. Then set a calendar reminder.
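The deletion plan can be written in code as well as in a document, so it travels with the data. A sketch under assumed names, with a hypothetical one-year retention policy:

```python
# Illustrative sketch: stamp a deletion date onto every collection record at
# the moment of collection. RETENTION_DAYS and the field names are assumptions.
from datetime import date, timedelta

RETENTION_DAYS = 365  # hypothetical retention policy

def collection_record(dataset_name: str, collected_on: date) -> dict:
    return {
        "dataset": dataset_name,
        "collected_on": collected_on.isoformat(),
        "delete_by": (collected_on + timedelta(days=RETENTION_DAYS)).isoformat(),
        "purpose": "stated at collection, reviewed before any reuse",
    }

rec = collection_record("loan_applications", date(2020, 5, 22))
print(rec["delete_by"])  # 2021-05-22
```

A scheduled job that enforces `delete_by` is the code equivalent of the calendar reminder, and it doesn't depend on anyone remembering why the data was collected.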

6. Say "no" sometimes

The most important habit, and the hardest one.

Sometimes a client will ask for something that makes you uncomfortable — a predictive policing tool, a sentiment-driven HR system, a dark pattern dressed up as personalisation. Sometimes the discomfort is worth exploring; sometimes it's a signal to walk away.

We've walked away from work before. It cost us money. It's never cost us sleep. Taking the money and losing the sleep is the trade nobody talks about.

On the bigger questions

The existential-AI debates are fascinating, and they matter, but they shouldn't be where most practitioners spend their ethics energy. The harder work is the daily work: this dataset, this feature, this decision, this user. The small, careful, habits-of-a-good-practitioner work.

If an AI system ever does something terrible, it will not be because the team didn't read the right philosophy paper. It will be because someone, on an ordinary Tuesday afternoon, didn't ask the question a more experienced colleague would have asked.

The habits above are how you become that more experienced colleague. Over time. Quietly. With practice.

A small pledge

We don't claim to always get this right. We try. When we get it wrong, we try to notice faster and fix it better than last time.

If you're thinking about the ethics of something you're building and would like a friendly second opinion, we'd be honoured to be that. Book a chat. No judgement.

Dimple Paratey
Chief Marketing Officer

As CMO of Partech Systems, Dimple Paratey drives technological innovation with over 15 years of digital transformation leadership at major telecom providers. Her expertise in transforming enterprise operations has delivered breakthrough solutions for global telecommunications companies. Recognized for her strategic vision in AI adoption, she champions the intersection of innovation and business growth across multiple industries.