I grew up in a city that was chaotically, wonderfully alive. Bangalore in the '90s — power cuts at dinner, street food at midnight, auto-rickshaw drivers who refused to use the meter and got you there faster anyway. Nothing about it was "smart" in the technology sense. It worked because of millions of small, human adaptations.
I think about that city a lot when I read about "smart cities."
The concept, as usually marketed, involves sensors on every corner, AI deciding traffic flows, facial recognition in public spaces, and data dashboards for mayors. Some of this is genuinely useful. Some of it is surveillance dressed up as municipal innovation. The line between them matters enormously.
Let me try to draw it carefully.
Where AI is helping cities, quietly
Traffic signal optimisation. Most city traffic lights were timed decades ago with rough rules of thumb. AI-adaptive signal systems adjust in real time to actual traffic, and deployments commonly report congestion reductions in the 10-20% range. This is both environmentally and economically significant, and it doesn't require facial recognition.
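One simple idea behind adaptive signals, sketched here with invented numbers, is to split each signal cycle across approaches in proportion to observed queue lengths, while guaranteeing every approach a minimum green phase:

```python
# A minimal sketch of proportional green-time allocation, one idea behind
# adaptive signals. Queue lengths are hypothetical sensor readings.

def allocate_green_time(queues, cycle_s=90, min_green_s=10):
    """Split a signal cycle across approaches in proportion to queue
    length, guaranteeing each approach a minimum green phase."""
    total = sum(queues.values())
    if total == 0:
        # No demand: split the cycle evenly.
        return {a: cycle_s / len(queues) for a in queues}
    spare = cycle_s - min_green_s * len(queues)
    return {a: min_green_s + spare * q / total for a, q in queues.items()}

# Hypothetical queue counts per approach at one intersection.
plan = allocate_green_time({"north": 12, "south": 4, "east": 2, "west": 2})
```

Real systems are far more sophisticated (coordination across corridors, pedestrian phases, reinforcement learning), but the core move is the same: timing follows measured demand rather than a fixed schedule.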
Predictive infrastructure maintenance. Water main breaks, pothole formation, road degradation — AI models looking at sensor data and historical patterns can predict where problems will happen and let cities fix them before they fail. Cheaper, faster, less disruptive.
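In spirit, these models score each asset's failure risk from its attributes and history, then rank the maintenance queue. A toy sketch with a hand-rolled logistic score (the weights are illustrative, not fitted to any real data):

```python
import math

# A toy risk model for water-main failure: a hand-rolled logistic score
# over pipe age and past break count. Weights are illustrative, not fitted.

def failure_risk(age_years, past_breaks, w_age=0.05, w_breaks=0.8, bias=-4.0):
    """Return a 0-1 risk score; higher means inspect sooner."""
    z = bias + w_age * age_years + w_breaks * past_breaks
    return 1 / (1 + math.exp(-z))

# Hypothetical pipe segments: (name, age in years, past breaks).
segments = [("A", 80, 3), ("B", 15, 0), ("C", 60, 1)]
ranked = sorted(segments, key=lambda s: failure_risk(s[1], s[2]), reverse=True)
```

Production systems fit these weights from sensor data and break records, but the output is the same shape: a ranked inspection list instead of a reactive repair queue.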
Energy efficiency in municipal buildings. Schools, hospitals, offices — all the buildings a city runs cost enormous amounts in heating and cooling. AI-driven HVAC optimisation can cut energy use by 20-30%, often with little capital cost. This is being widely deployed, quietly.
Emergency response optimisation. AI models that predict where ambulances, fire trucks, and police will be needed next help pre-position resources. Response times drop by meaningful amounts. Lives get saved.
Accessibility mapping. Routing apps that help people with mobility disabilities find accessible paths through cities — avoiding steps, broken pavements, unlit areas. This isn't glamorous smart-city tech. It's life-changing for the people it serves.
Waste collection routing. Modern cities increasingly use AI to route waste trucks based on bin fill-levels (sensed remotely). Fewer trucks, fewer miles, less noise at 6am. Small win, but it compounds.
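The mechanism here is simple enough to sketch: skip bins below a fill threshold, then route the truck through the rest. A minimal version with a greedy nearest-neighbour tour (coordinates and fill levels are invented; real deployments use proper vehicle-routing solvers):

```python
import math

# A sketch of fill-level-aware routing: skip bins below a threshold,
# then visit the rest with a greedy nearest-neighbour tour from the depot.
# Coordinates and fill levels are invented.

def plan_route(bins, depot=(0.0, 0.0), threshold=0.7):
    """bins: {name: ((x, y), fill_fraction)}. Returns the visit order."""
    todo = {name: pos for name, (pos, fill) in bins.items() if fill >= threshold}
    route, here = [], depot
    while todo:
        nearest = min(todo, key=lambda n: math.dist(here, todo[n]))
        route.append(nearest)
        here = todo.pop(nearest)
    return route

route = plan_route({
    "bin1": ((1, 0), 0.9),
    "bin2": ((5, 5), 0.3),   # below threshold: skipped this round
    "bin3": ((2, 1), 0.8),
})
```

The win comes from the skipping, not the routing cleverness: every half-empty bin left for next week is a stop, a mile, and an early-morning engine the street doesn't hear.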
Where AI in cities goes wrong
Surveillance creep. The same cameras that help with traffic can be used for facial recognition. The same sensors that help with noise pollution can be used to identify specific people. Once the infrastructure is in place, it's politically hard to constrain what it's used for.
Lock-in to specific vendors. "Smart city" platforms are often sold by a small number of vendors with proprietary stacks. Cities that sign these deals often lose the flexibility to change providers, open their data, or adjust their use over time.
Bias in public services. AI systems that decide where to send police patrols, or how to prioritise housing inspections, can reinforce historical biases in ways that hurt already-underserved communities. This is well-documented and keeps happening.
Exclusion by technology. Cities that increasingly require apps, digital IDs, or online interactions for basic services exclude the elderly, the homeless, and recent arrivals. "Smart" that excludes isn't smart.
Security and privacy disasters. City-scale AI infrastructure is a huge attack surface, and breaches have already happened. Cities generally have fewer resources to defend it than their private-sector counterparts do.
What good "AI for cities" looks like
A few patterns I admire:
Barcelona's Decidim platform. Not flashy. Uses AI to help citizens meaningfully participate in city decisions — surfacing proposals, translating across languages, summarising discussions. Democracy-enhancing tech, not democracy-replacing tech.
Estonia's X-Road. Their nationwide digital infrastructure uses careful data sharing between agencies, with strong consent and transparency rules. Citizens can see who has accessed their data and why. This is a foundation you could build healthy AI on top of.
Copenhagen's approach to climate data. Open APIs, public dashboards, citizen-scientist partnerships. Data the city collects is shared back with residents. Builds trust. Enables innovation.
Amsterdam's algorithm register. The city publishes a public register of the algorithms used in municipal decisions — what they do, what data they use, how they can be challenged. This should be the baseline for every city deploying AI.
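A register entry doesn't need to be complicated. Here is a hypothetical minimal schema, loosely inspired by Amsterdam's approach — the field names and example values are my own, not the city's:

```python
from dataclasses import dataclass, asdict

# A hypothetical minimal schema for an algorithm-register entry, loosely
# inspired by Amsterdam's register. Field names are illustrative.

@dataclass
class AlgorithmEntry:
    name: str
    purpose: str
    data_used: list
    decision_role: str    # e.g. "advisory" or "automated"
    challenge_route: str  # how a resident contests an outcome

entry = AlgorithmEntry(
    name="Housing inspection prioritiser",
    purpose="Rank buildings for safety inspection",
    data_used=["permit history", "complaint counts"],
    decision_role="advisory",
    challenge_route="Written appeal to the housing department",
)
record = asdict(entry)  # ready to publish as JSON in a public register
```

The point is less the schema than the discipline: if a system can't fill in the `challenge_route` field, it probably shouldn't be deciding anything about residents.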
What I'd want from the city I live in
If my local council were building an AI strategy, I'd want:
- Transparent use. Every AI system used in decisions about residents should be publicly documented.
- Data minimisation. Collect what's needed for the task, keep it only as long as needed, and don't combine it casually.
- No facial recognition in public spaces, full stop. Too much risk, too little democratic accountability.
- Open interfaces. Vendor lock-in is a civic-infrastructure risk. Cities should own their data and their standards.
- Genuine consultation. Before an AI system is deployed, residents should have meaningful say. Not a comment form. A conversation.
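The data-minimisation point above can even be expressed as code rather than policy text: every record carries a purpose and a retention window, and anything past its window is purged. A sketch, with purposes and durations that are entirely illustrative:

```python
from datetime import datetime, timedelta, timezone

# Data minimisation as code: records carry a purpose with a retention
# window, and anything past its window is dropped. All names and
# durations here are illustrative, not any city's actual policy.

RETENTION = {
    "traffic_count": timedelta(days=30),
    "service_request": timedelta(days=365),
}

def purge(records, now=None):
    """Keep only records still inside the retention window for their purpose."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if now - r["collected_at"] <= RETENTION[r["purpose"]]]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"purpose": "traffic_count", "collected_at": now - timedelta(days=5)},
    {"purpose": "traffic_count", "collected_at": now - timedelta(days=90)},
]
kept = purge(records, now=now)
```

A retention rule that runs automatically is worth more than one that lives in a PDF, because "keep it only as long as needed" otherwise tends to mean "forever".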
A quiet ambition
The city I grew up in isn't the city it was. It's cleaner in places, wealthier overall, and has many more sensors and many fewer auto-rickshaw drivers. I'm not sure on balance it's better. But the best version of technology-in-cities would keep the aliveness while smoothing the genuine frictions — the floods, the pollution, the broken bus routes, the slow emergency response.
We can build that version. It's a choice.
If you're a city, a civic-tech team, or a planner thinking about where AI fits, we'd be delighted to help you think. Especially the consent-and-transparency parts. Those are dear to us.