Algorithmic Societies


Algorithmic Societies: Ethical Life in the Machine Learning Age is funded by an ERC Advanced Grant (883107 ALGOSOC) awarded to Professor Louise Amoore, and is based in Durham University’s Department of Geography. The project explores machine learning as an adaptive and iterative process of world-making. The research will advance a new approach to the ethics of a machine learning age, foregrounding the weights, assumptions, thresholds and parameters of algorithmic systems.

Louise Amoore talks further about the Algorithmic Societies project:

Louise Amoore talks about some of the themes that animate the Algorithmic Societies project
Louise Amoore reflects on how machine learning potentially transforms how a society understands itself, its problems and its politics

The Algorithmic Societies research has three objectives:

1. Recognition & targeting: To understand how machine learning algorithms are generating a new societal ethics of recognition.

Our team are researching how specific deep learning algorithms are changing how societies come to recognise people, objects, scenes, and connections. The question of how we come to recognise others is central to the formation of notions of society, community, citizenship, political assembly, and participation. At one level, advanced deep learning systems are fundamentally changing societal norms of recognition because they identify people and objects in spaces, such as in the use of automated facial recognition or in biometric border controls. However, machine learning is also making possible behavioural models that redefine the boundaries of who or what can be recognised in a society or polity. What is at stake is not only who or what is recognised, but how the regime of recognition generates qualified claims as adjudicated by the algorithm.
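The role a single parameter can play in adjudicating recognition can be illustrated with a toy sketch. Everything below is hypothetical (the names, scores, and thresholds are invented for illustration and are not drawn from any real biometric system); it simply shows how moving one threshold redraws the boundary of who is recognised.

```python
# Illustrative only: a recognition decision reduced to one model score
# and one threshold parameter.

def recognise(match_score: float, threshold: float) -> bool:
    """Return True if the system 'recognises' a person.

    The threshold is a design choice: changing it silently redraws
    the boundary of who counts as recognised.
    """
    return match_score >= threshold

# Invented similarity scores for three hypothetical travellers.
scores = {"traveller_a": 0.91, "traveller_b": 0.62, "traveller_c": 0.58}

# The same scores yield different recognition outcomes under different thresholds.
strict = {name: recognise(s, threshold=0.9) for name, s in scores.items()}
lenient = {name: recognise(s, threshold=0.6) for name, s in scores.items()}

print(strict)   # only traveller_a is recognised
print(lenient)  # traveller_a and traveller_b are recognised
```

The point of the sketch is that neither outcome is more "true" than the other: the qualified claim to recognition is produced by the parameter itself.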

2. Attribution & difference: To analyse how societal differences are generated and attributed through machine learning algorithms.

The computational search for clusters and attributes radically alters the making of difference in our societies. The algorithm understands groupings in a fundamentally distinct way – as clusters of attributes that are detectable in data. Our team are researching how data derivatives are extracted and how meanings are attributed to clusters. They are mapping how algorithmic processes of attribution generate new parameters of sameness and difference. The project investigates how assumptions, bias, and thresholds within machine learning become incorporated into the building of models, and how attributes become reincorporated into algorithmic systems.
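A minimal sketch can make concrete what it means for a grouping to be "a cluster of attributes detectable in data". The example below is purely illustrative (the attribute values and cluster names are invented, and this is not the project's method): membership in a group is nothing more than proximity to a centroid, and the cluster carries no meaning until one is attributed to it.

```python
# Illustrative only: "difference" generated as distance between
# points in an attribute space.

from math import dist

# Hypothetical individuals described by two arbitrary numeric attributes.
people = {
    "p1": (0.10, 0.20),
    "p2": (0.15, 0.25),
    "p3": (0.90, 0.80),
}

# Invented cluster centres; in practice these would be learned from data.
centroids = {"cluster_a": (0.1, 0.2), "cluster_b": (0.9, 0.9)}

def assign(point):
    # Sameness and difference are defined by nearest-centroid distance,
    # nothing else.
    return min(centroids, key=lambda c: dist(point, centroids[c]))

labels = {name: assign(xy) for name, xy in people.items()}
print(labels)  # p1 and p2 fall in cluster_a, p3 in cluster_b
```

The meanings later attached to `cluster_a` and `cluster_b` are attributions made after the fact; the algorithm itself only measures distance.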

3. Inference & futures: To investigate how machine learning generates inferential models of the future, and to understand the consequences of new forms of inference for the ethical relations of contemporary societies.

Our team are researching what happens when specific algorithmic forms of inference enter automated decision systems, and how they act upon the future. The findings from this part of the project are intended to inform the mapping of points in the inference pathway where alternative connections could be built. This is particularly important in areas such as criminal justice and health care, where inferred futures have a powerful impact on people’s life chances.
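The idea of an inference pathway with identifiable intervention points can be sketched schematically. The example below is entirely hypothetical (the feature names, weights, and threshold are invented stand-ins, not a real predictive model): it shows how an inferred future becomes an automated decision through a single rule, which is one place where an alternative pathway could be built.

```python
# Illustrative only: an inferred future entering an automated decision.

def inferred_risk(features: dict) -> float:
    # Stand-in for a learned model: a fixed weighted sum, not a real predictor.
    weights = {"prior_contacts": 0.5, "age_band": -0.25}
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def automated_decision(risk: float, threshold: float = 0.5) -> str:
    # The inferred future acts on the present through this single rule;
    # the rule itself is an intervention point in the inference pathway.
    return "refer_for_review" if risk >= threshold else "no_action"

case = {"prior_contacts": 2.0, "age_band": 1.0}
risk = inferred_risk(case)        # 0.5*2.0 + (-0.25)*1.0 = 0.75
print(automated_decision(risk))   # refer_for_review
```

Separating the inference step from the decision rule, as above, is what makes it possible to ask where in the pathway an alternative connection could be built.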