Algorithms have become central objects of contemporary societies. They shape how we communicate, work, consume, govern, and imagine possible futures. Yet, despite their pervasiveness, algorithms are not merely technical systems: they are social, political, and cultural objects, embedded in relations of power, shaped by economic incentives, and infused with visions of what society should be. This conference, Socializing Algorithms, brings together scholars from across disciplines to examine algorithms not as isolated technologies, but as co-produced within social practices, institutions, and imaginaries. To structure this exploration, the conference is organized into three strands that address algorithms as socially embedded systems (Algorithmic Sociality), as objects of regulation and instruments of governance (Governance and Accountability), and as sites for imagining alternative futures (Socio-technical Imaginaries).
1. Algorithmic Sociality
Algorithms have become central to both everyday life and the social sciences, yet debates about them are often shaped by persistent myths that portray them as either all-powerful or fundamentally inscrutable (Ziewitz 2016). Such narratives have intensified with machine learning and large language models, frequently framed as heralding a paradigm shift that threatens to devalue human skills and capacities. Critical perspectives, however, point to the limits of these technologies and emphasize how their spread is bound up with power, capital, and social structures.
Much of this debate rests on a quasi-essentialism that treats humans and algorithms as independent, autonomous entities. What often gets overlooked is how intertwined humans and algorithms are (cf. Amoore 2020; Crawford 2021; Hayles 2025; Suchman and Thimm 2024; Suchman 2006). Algorithms are deeply embedded in society, "socialized" into existing norms, practices, and infrastructures, while at the same time actively reshaping and socializing those very norms and practices (Seyfert 2024). Much like earlier technologies that were initially contested before being integrated through social and institutional adaptation, they are not neutral or self-contained, but evolve through political negotiation, regulation, and everyday use (cf. Elias 1995). Proprietary markets, black-box architectures, and stark power imbalances between corporations, governments, and civil society shape how algorithms are embedded in social life (Crawford 2021). These imbalances are in turn reshaped by algorithmic systems that reorganize communication, labour, and public discourse.
To account for this "reciprocal socialization," we propose shifting the focus from the algorithmic object (its capabilities and inscrutabilities) and the human subject (its control and experience) to their interrelations. A genealogy and praxeology of the algorithm reveal an indivisible process that spans the entire cycle, from design and production to regulation and everyday use. Algorithms are the product of interactions between economic interests, cultural expectations, regulatory institutions, developers, and users. This relation is not a conflict-free process. Tensions repeatedly arise between different goals, values, and stakeholder groups. Data protection, regulation, and standardization conflict with data-driven optimization and informational self-regulation; local needs contradict globally standardized platform architectures; regulatory interventions meet with resistance from the platform economy. A political perspective on co-production therefore not only questions the social conditioning of technology, but also asks how technology in turn stabilizes or transforms social and political orders (Jasanoff 2004). Algorithmic co-production thus also encompasses questions of power, resource distribution, accessibility, and use: Who defines the goals of an algorithm? Who decides on its design and use? Who uses it, and how? And how do existing social structures affect these processes?
2. Governance and Accountability
The platform power of Big Tech has spurred regulatory efforts worldwide, yet with limited success in curbing their influence. High-profile cases — such as Elon Musk's acquisition of Twitter, which ignited debates about misinformation, "platform exitism," and the role of platforms as public infrastructures — underscore the entanglement of corporate control with public life. More recently, the rapid uptake of generative AI systems like ChatGPT illustrates how technological developments are inseparable from the economic incentives and strategic interests of major AI companies (Luitse and Denkena 2021; van der Vlist et al. 2024). As algorithms spread across social life, debates have intensified not only about how they should be governed, but also about how they themselves function as instruments of governance (Amoore 2013, 2020; van Dijck et al. 2025; Jarke et al. 2024).
Against this backdrop, questions of governance and accountability become central to embedding algorithms within society. Mechanisms such as audits, standards, certification schemes, and regulatory review procedures aim to ensure that algorithms align with social norms and values. Yet governance takes many forms: from comprehensive frameworks like the EU AI Act, to sector-specific laws, to voluntary self-regulation by companies. In all these cases, the spread of algorithmic technologies is deeply entangled with technical innovation, regulatory oversight, economic incentives, labour relations, and the interactions of state, market, and civil society actors (van Dijck et al. 2025, p. 14). No single model prevails, and regulators must constantly balance competing aims — fostering innovation and competition while minimizing risks and harms. As a result, accountability cannot be understood in purely legal terms, but must also include organizational practices and technical arrangements.
In light of these dynamics, this strand invites theoretical approaches and empirical studies that examine governance through the lens of regulatory co-production. From this perspective, governance is not merely external control, but a process that shapes what algorithms are and what they are expected to do: "they determine, in specific settings, what an algorithm is and what it ought to do" (Seyfert 2022, 1542). Regulation can thus be understood as a social practice that formalizes relations, creates accountable orders, and inscribes obligations (Woolgar and Neyland 2020). These practices range from state policies and laws to organizational audits, managerial oversight, or even the habitualized use of algorithms in everyday life. Their material forms are equally heterogeneous: legal texts, technical standards, social conventions, metrics, or productivity goals. In all of them, the objects to be governed and the subjectivities of those affected are negotiated together, embedding governance into the very fabric of algorithmic life.
3. Socio-technical Imaginaries
Building on the themes of socialization and governance, this strand turns to the role of algorithms in shaping collective imaginaries. Socio-technical imaginaries — shared ideas about what technology can and should achieve (Jasanoff and Kim 2015) — exert strong influence over how algorithms are designed, deployed, and regulated. Debates about algorithms are saturated with utopian promises of full automation or creative revolutions, alongside dystopian warnings of surveillance and cultural homogenization. Generative AI, for instance, is alternately framed as a cultural breakthrough or a threat to human creativity, while algorithmic tools in healthcare are depicted as either enhancing efficiency or raising pressing ethical concerns.
Socio-technical imaginaries do not operate only at the level of political rhetoric or corporate vision statements but are taken up, adapted, and contested across diverse sites of practice — in research labs, design studios, policy debates, and public discourse. They can legitimize massive investments into particular technological trajectories while marginalizing others, such as community-driven or non-commercial alternatives. Circulating globally yet locally reinterpreted, imaginaries produce tensions between universal promises of innovation and situated cultural, political, or ecological concerns. Tracing how imaginaries travel, become institutionalized, and are reworked in specific contexts makes it possible to see not only the futures that dominate but also the suppressed or emergent alternatives that might be reclaimed.
While many critical works have dissected the dominance of Silicon Valley solutionism (Morozov 2013), monopolistic platform models, or data-driven governmentality (Rouvroy 2013), this strand is equally concerned with alternative visions, with "visions of desirable futures" (Jasanoff 2015, 4). Proposals such as a "digital stack for people and the planet" (Rikap et al. 2024), visions of platform socialism (Muldoon 2022), the Non-Aligned Technologies Movement, or experimental uses of computer games for democratic ecological planning (Groos 2025) exemplify attempts to articulate futures beyond the dominant narratives.
What unites these approaches is a dual strategy of critique and constructive experimentation with alternative futures. Algorithms must be rigorously analyzed for how they reproduce power (Beer 2017), but they also open pathways for imagining and enacting alternative forms of collective life and governance (Seibel 2016, 7). These pathways never emerge automatically and are not always desirable, but they highlight the need for ongoing democratic inquiry into what kinds of technologies we want, and for whom. This strand of the conference therefore invites contributions that explore imaginaries of algorithms — whether in the form of critical analyses, historical genealogies, or speculative and constructive alternatives — with the aim of examining how visions of technology shape, and are shaped by, social futures.
Aim of the Conference and Call for Papers
The Socializing Algorithms conference brings these strands together to examine algorithms as simultaneously technical systems, regulatory objects, and sites of future-making. By focusing on processes of socialization, governance, and imaginaries, the conference seeks to move beyond essentialist debates about whether algorithms are powerful or powerless, transparent or opaque, and instead highlights how they are co-produced in practice.
We invite scholarly contributions tackling questions along one of these strands, and we hope the conference will lay the foundation for a lasting network of researchers and practitioners. The conference is hosted by the "Governing Algorithms" project (DFG) at the Institute of Social Sciences at Kiel University (Prof. Robert Seyfert). It will be organized around keynotes, paper presentations, and discussions. The conference is in-person and requires the submission of an abstract and the presentation of a paper; remote participation or presentation is not possible. For accepted presenters, we are able to cover travel and accommodation costs.
Please submit an abstract of no more than 300 words (excluding bibliography) by February 28, 2026.
Abstracts and further questions should be sent to the following email address:
Conference_2026(at)algorithmen-regieren.de
Confirmed invited keynotes include:
• N. Katherine Hayles, UCLA
• Louise Amoore, Durham University