Responsible AI

Can we ensure fairness in Smart Cities?

Larissa Suzuki
7 min read · Mar 28, 2021


In 2021, it’s safe to say that the Internet, the World Wide Web, and the myriad of technologies derived from their development are all here to stay. With the ceaseless amalgamation of these various innovations, engineers are creating a cyber-physical world where pervasively interconnected objects, things, and processes can potentially unlock a breadth of unprecedented opportunities. However, I should point out that encapsulating the entire medley of possibilities afforded by these technologies is a considerable endeavour requiring a far longer and more comprehensive overview — perhaps in the form of a book, or three — than this article can offer in isolation. As such, I’ll concentrate on something closer to my own work: smart cities. More specifically, I’ll be focusing on the potential for us to optimally — and transparently — manage and operate city-wide infrastructure.

A possible future

Given the increasing ubiquity of connected devices throughout society, digital technologies and AI offer a new wave of opportunities to turn information into actionable insights, balancing social, environmental, and economic priorities. These opportunities, in turn, can be delivered through smart city planning, design, and construction. Machine learning (ML) techniques are transforming how we capture, process, inspect, and analyse data, with impacts on everything from water and energy management to traffic, autonomous vehicles, law enforcement, and healthcare. ML has the potential to revolutionise urban services.

However, we may never realise this potential unless we adopt the right design approaches. The prevailing idea of what constitutes a ‘smart city’ currently centres on top-down ICT deployments, and there are several problems with this. If we create and distribute ‘smart things’ without ensuring that they’re actually relevant to and usable by everyone, then at best we inconvenience a large portion of the population. More dangerously, applying ML techniques to design services that don’t account for the full socio-cultural, economic, or political spectrum could result in a more stratified and biased society.

Data and AI

Today, cities face up to thousand-fold increases in data volumes compared with just a decade ago. Ultimately, the increase stems from a combination of factors: the proliferation of internet-enabled technologies (such as the Internet of Things and modern smartphones), open data initiatives, and user-generated content. While this data theoretically affords new ways to optimise existing processes, the important thing to keep in mind is that data isn’t socially benign. At every stage of generation, processing, analysis, and distribution, data is shaped by specific contexts, views, ideals, and even the type of technology being used. As such, data and services are highly susceptible to the choices and constraints of a system driven by public, political, financial, ethical, and regulatory opinions and considerations. Data and services can therefore come to encode social privilege and particular social values.

As I highlighted earlier, there is a vast pool of potential for data and innovations in digital services to provide useful information and tools for managing and improving city services. That said, this isn’t to suggest that everything is utopian and that we don’t need cautionary measures. The politics behind (and limitations of) such data, as well as the methods used to produce and analyse it, require a meticulous examination of the values and agendas that underpin them. It’s also important that we are transparent about whose interests they serve. Data and services need to be complemented by a range of other instruments, policies, ethics, and practices that are sensitive to the diverse ways in which cities form and function. If we ignore these considerations, we risk excluding entire groups from the provision of services.

Data bias, AI, and social stratification

Removing the veil from big data analytics and intelligence can ensure that services are provided fairly.

As cities and governments rely more on automation and ML, there’s a chance that these problems will only garner the necessary attention once they become social issues. A classic example of a lack of neutrality in digital services is Amazon’s ‘same day delivery’ service. Bloomberg’s research uncovered that Amazon excluded the service from many predominantly Black neighbourhoods in the USA. While Amazon claimed that racial information was not embedded in the algorithms targeting American cities, the lack of comprehensive examination of demographic data led to the exclusion of many potentially valuable customers.

As another example, recent research has raised questions about how data bias can affect basic access to the city itself — looking at how access varies for elderly and economically deprived citizens (Glasmeier 2015; Offenhuber 2015). In Boston, for instance, less privileged neighbourhoods experienced a significantly higher digital divide and struggled to report city maintenance issues through digital channels when compared with other, more privileged areas.

Therefore, as cities become more heavily based on automation and ML, city management needs to remain highly transparent. Similar to the requirements of GDPR, cities will need to publicly disclose the intentions behind the services they provide, the nature and scope of any collected data, the contexts in which that data is repurposed and used to decide how services are delivered (and to whom), and the reasoning behind such decision-making. Removing the veil from big data analytics and intelligence can ensure that services are provided fairly.
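To make the idea concrete, one hypothetical way a city could publish such disclosures is as a machine-readable transparency record per service. This is only a sketch: the field names below are my own illustration, not a GDPR-mandated schema.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical transparency record a city might publish for each automated
# service. Field names are illustrative, not a GDPR-mandated schema.
@dataclass
class ServiceTransparencyRecord:
    service: str                                          # what the service is
    purpose: str                                          # intention behind providing it
    data_collected: list = field(default_factory=list)    # nature and scope of data
    repurposed_for: list = field(default_factory=list)    # contexts of reuse
    decision_logic: str = ""                              # reasoning behind how/to whom it is delivered

record = ServiceTransparencyRecord(
    service="pothole-repair dispatch",
    purpose="prioritise road maintenance crews",
    data_collected=["citizen reports", "road sensor data"],
    repurposed_for=["annual infrastructure budgeting"],
    decision_logic="reports weighted by road usage and severity",
)

# Publishing the record as JSON makes the disclosure auditable by the public.
print(json.dumps(asdict(record), indent=2))
```

Whatever the format, the point is that each disclosure covers all four elements above, so decisions can be contested rather than merely observed.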

Guiding AI’s development

Artificial intelligence is likely to have a significant and widespread impact on the nature of jobs. On one hand, engineering innovation in this space can drastically increase productivity and profitability, and will hopefully help secure future food, transportation, water, and energy provision. On the other, an issue arises if we don’t properly legislate the resulting data and technologies. Without adequate legislation, these developments could lead to widespread displacement of jobs and a disproportionate effect on economies, more often than not perpetuating lower-quality access in disadvantaged communities. What’s more, a lack of transparency around data sources or technologies significantly reduces accountability.

Thankfully, the ethical and social impact of smart city services is a thriving and challenging field of study. Recent research, such as Julia Angwin’s study of racism in criminal justice algorithms and Kate Crawford and Ryan Calo’s work on the broader impact and consequences of artificial and disruptive technologies in societies, highlights the need to better comprehend how data and digital technologies underpin smart city services in order to manage them.


Given that disruptive systems increasingly rely on large-scale (and sometimes sensitive) datasets, issues associated with trust and privacy, ownership, bias, transparency, and fairness will continue to grow. These issues are highly complicated and interwoven in ways beyond the scope of this article, but to provide an example, let’s consider the implications of an automated decision-making system. Depending on the nature of the data, the context in which it is recorded, the way(s) it is analysed, and how it is distributed, such a system could undermine people’s privacy through the expansive use of personal information, or lock in existing societal biases that appear in the underlying data. This poses important questions about balancing transparency with anonymization. While the overarching processes need transparency for accountability, any effective data infrastructure needs mechanisms to anonymize personal data.

Once again, engineers and computer scientists have a role to play here. Rather than using anonymization algorithms prone to failure, blockchain-based interfaces — although a novel technology largely still in the experimentation phase — have the potential to produce new mechanisms of anonymity or pseudonymity by making it far harder to link data back to an individual. Blockchain enables the automated transfer of value across digital networks, without the need for intermediaries.
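To give a flavour of what pseudonymity means in practice, here is a deliberately simple sketch — a keyed hash, not a blockchain — of how a data infrastructure might replace citizen identities with unlinkable tokens. The function name and key handling are my own assumptions for illustration.

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch of pseudonymisation via a keyed hash (HMAC).
# Records carry a pseudonym instead of the citizen's identity; without
# the secret key, the pseudonym cannot be linked back to the person.
SECRET_KEY = secrets.token_bytes(32)  # held only by the data controller

def pseudonymise(citizen_id: str) -> str:
    """Deterministic pseudonym: the same citizen always maps to the same
    token, but inverting the mapping without SECRET_KEY is infeasible."""
    return hmac.new(SECRET_KEY, citizen_id.encode(), hashlib.sha256).hexdigest()

record = {"citizen": pseudonymise("jane.doe@example.org"), "issue": "pothole"}

# Determinism preserves analytical value (e.g. counting repeat reports),
# while the key prevents the dictionary attacks that defeat plain hashes.
assert pseudonymise("jane.doe@example.org") == record["citizen"]
assert pseudonymise("john.doe@example.org") != record["citizen"]
```

Blockchain systems take this idea further by deriving pseudonymous addresses from cryptographic keys held by the individual, so no central party ever holds the linking secret at all.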

Interpreting the data

The bias effect on even a small proportion of a data set propagates throughout the final data analysis.

Correctly interpreting urban environment data, and extracting insights from that data, are not straightforward tasks. Biasing effects are a well-known phenomenon in information systems research, and they have been extensively documented in decision-making processes in urban environments. The research of Tversky and Kahneman (Tversky & Kahneman 1974) demonstrated how bias in even a small proportion of the available evidence propagates through an entire analysis; the same holds for big data applications fed with vast amounts of heterogeneous data. If we measure the performance of a city through data, we potentially exclude vast portions of its population from those metrics due to biases that algorithms pick up from datasets, or due to overfitting. Such algorithms then tend to generalise the bias in future predictions, responses, and policy-making instead of providing equitable, non-biased solutions.
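A small simulation can show how this exclusion works. Suppose two neighbourhoods have the same true rate of maintenance issues, but one reports digitally far less often (a digital divide, as in the Boston example above). The rates and names below are invented for illustration only.

```python
import random

random.seed(42)

# Hypothetical scenario: both neighbourhoods genuinely have the same
# rate of maintenance issues, but neighbourhood B reports far fewer of
# them through digital channels (a digital divide).
TRUE_ISSUE_RATE = 0.30
REPORTING_RATE = {"A": 0.9, "B": 0.3}  # fraction of issues actually reported

def observed_issue_rate(neighbourhood: str, n_households: int = 10_000) -> float:
    """Rate of issues *visible in the data*, not the true rate."""
    reported = 0
    for _ in range(n_households):
        has_issue = random.random() < TRUE_ISSUE_RATE
        if has_issue and random.random() < REPORTING_RATE[neighbourhood]:
            reported += 1
    return reported / n_households

rate_a = observed_issue_rate("A")
rate_b = observed_issue_rate("B")

# An allocation rule trained on this data would send most repair crews
# to A, even though both areas need them equally: the reporting gap,
# not the true need, drives the decision.
print(f"Observed issue rate A: {rate_a:.2f}, B: {rate_b:.2f}")
```

Nothing in the downstream analysis can detect this gap from the reported data alone; correcting it requires knowing how the data was generated, which is exactly the transparency argument made above.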

Moving forward

Given the rapid pace of technological and software innovation and the comparatively slower pace of legislation, there is an inherent need for neutrality to be a fundamental component of data and its surrounding processes. Data and services in smart cities must be neutral and objective when reporting information about the city environment. They should encompass the entire population and respect data licenses, regulation, and privacy laws. In a similar fashion, the digital services and the backbone technology — including algorithms — should be free from any ideology or influence in their conception, operation, integration, and dissemination.

Understanding the changes and impacts that directly or indirectly result from the design of digital services in smart cities is important. If people can better comprehend these services, they can plan the appropriate strategies to manage any risks and extract the full potential that disruptive technologies can deliver.

Ensuring service neutrality and fairness in smart city design can enable the creation of fairer, more accessible, safer, and more secure cities. As we continue the rapid pace of innovation, these cities can therefore avoid being exploited as a commodity.

This aligns with Harvey’s research on our Right to the City:

“The right to the city is far more than the individual liberty to access urban resources: it is a right to change ourselves by changing the city. The freedom to make and remake our cities and ourselves is one of the most precious yet most neglected of our human rights.”

Based on “Data as Infrastructure for Smart Cities” (Suzuki & Finkelstein, 2018)



Larissa Suzuki

Engineer, inventor, entrepreneur, philanthropist • #Data/AI Practice Lead, #AIEthics fellow, Interplanetary Internet @google • Prof @ucl