Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0


The third ColibrAI meeting, led by Aalto, focused on how design and participation can actively reshape AI's trajectories, examining how institutional, cultural, and infrastructural conditions surrounding AI might be transformed. The guiding concern was not only what AI systems obscure or reproduce, but how their development and governance could be reconfigured through participatory practices.

A foundational premise framed the discussion: design is not simply the production of discrete objects, but the cultural production of new forms of practice. From this perspective, AI systems are not isolated technical artifacts. They are embedded in social routines, material infrastructures, and epistemic hierarchies. Designing AI, therefore, cannot be separated from redesigning the environments that sustain it. Questions of technical optimization inevitably lead to questions of power, expertise, and whose knowledge is recognized as authoritative.

This perspective made it necessary to examine participation itself. Participation is frequently presented as inherently democratic, yet it is not automatically emancipatory. It holds the potential for empowerment, but also for illusion. When participatory exercises are layered onto systems whose architectures, business models, and predictive logics remain unchanged, inclusion risks becoming symbolic. Participation was therefore framed not as an invitation but as a negotiation over who defines the problem, who sets the criteria of fairness, and who has the authority to intervene at the infrastructural level.

The traditions of participatory design, particularly those influenced by participatory action research, offered a way to move beyond token engagement. These traditions emphasize mutual learning, distributed experimentation, and infrastructuring rather than isolated product development. This shift in focus, from individual products to the larger systems and infrastructures that sustain them, became a key point in the discussion. If AI systems are sustained by data pipelines, governance structures, evaluation metrics, and educational frameworks, then meaningful participation must engage these broader assemblages rather than only the visible interfaces of AI tools.

Within this framework, an important distinction emerged between designing with AI and designing AI. Designing with AI treats it as a creative medium within collaborative or speculative processes. Designing AI positions it as an object of critical intervention, requiring scrutiny of its architectures, training data, and embedded assumptions. This distinction highlights a dual responsibility: experimenting with AI as a medium while simultaneously interrogating the epistemic and infrastructural conditions that shape its operation.

A central thread throughout the session was whether the dominant terms of AI discourse can be shifted. Instead of focusing on efficiency, automation, and optimization, participatory and speculative design approaches open space for imaginaries grounded in care, plurality, relationality, and collective world-making. Projects such as AI-assisted participatory interventions with queer nightlife communities illustrated how speculative laboratories can cultivate counter-practices and counter-spaces. These initiatives demonstrate that AI can be engaged not only as a tool of standardization but as a medium for questioning and reimagining social and urban futures.

A review of current research landscapes underscored the importance of this orientation. While design-related AI research is expanding, explicitly justice-centered, decolonial, and participatory approaches remain comparatively marginal. This gap underscores the need for interdisciplinary collaboration that integrates technical expertise, critical theory, and creative practice. Such collaboration does more than merge perspectives; it creates the possibility of a shared epistemic space capable of challenging technocentric and Western-centric assumptions.

In this framing, community-centric AI is not limited to identifying harms or inviting feedback. It entails building infrastructures that redistribute voice, authority, and responsibility. It requires questioning who is authorized to define problems, imagine futures, and shape technological systems.

In addition to conceptual discussions on participatory design, the session also included what was described as “Adventure(s) with SCOPUS and AI”. This exercise revisited the literature review process itself as a site of inquiry. Experimenting with different SCOPUS search strings that combined terms such as AI, participatory design, inclusivity, decolonization, and speculative design revealed not only thematic trends but also structural gaps.

Searches that explicitly included inclusivity, intersectionality, or decolonial keywords produced surprisingly few relevant results, especially from design-oriented perspectives. When these terms were removed, the number of results increased substantially, but the justice-centered focus weakened. This “adventure” highlighted how database logics, keyword conventions, and disciplinary boundaries shape what becomes visible as legitimate research. It reinforced a central insight of the session: knowledge production around AI is itself infrastructural and political. Even the act of searching the literature reflects and reproduces dominant imaginaries, revealing how much work remains to articulate and consolidate justice-oriented, participatory approaches within AI and design research.
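The kind of query variation described above can be sketched in Scopus advanced-search syntax. `TITLE-ABS-KEY` and the Boolean operators are standard Scopus query conventions; the specific search strings below are illustrative assumptions, not the ones used in the session:

```
Broad query (many results, weak justice-centered focus):
TITLE-ABS-KEY ( "artificial intelligence" AND "participatory design" )

Narrowed query (far fewer results, as the session observed):
TITLE-ABS-KEY ( "artificial intelligence" AND "participatory design"
  AND ( inclusivity OR intersectional* OR decolonial* OR "speculative design" ) )
```

In sketches like these, adding explicitly justice-oriented keywords sharply restricts the result set, which is the structural gap the “adventure” surfaced: the narrowing reflects not only the state of the field but also how keyword conventions and indexing decide what the database can make visible.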