The first ColibrAI meeting, led by KTH, brought the group together to establish a shared foundation for thinking about AI as a matter of justice rather than merely technology. The starting point was ecofeminism, which frames AI not simply as a technical system but as a structure of power. AI shapes whose voices are heard, whose needs are prioritized, and whose lives, labor, and environments are made visible or rendered invisible. Although AI often presents itself as neutral and objective, the discussion emphasized how it encodes social, political, and economic values, frequently reproducing hierarchies that long predate digital technologies.

From this framing of AI as power, attention turned to what remains hidden within AI systems. The concept of epistemic omission, drawing on Gayatri Spivak's notion of epistemic violence, helped articulate how certain forms of knowledge and certain communities fall outside the frame of mainstream AI design. What is never collected, never labeled, or never modeled has material consequences. Examples ranging from healthcare research and urban planning to smartphone design and crash-test standards illustrated how data gaps shape whose realities matter and whose risks are ignored. The idea of “privilege hazard” further clarified how structural ignorance among designers can become normalized, embedding exclusion into technological systems without appearing intentional.

This focus on omission naturally extended to labor. AI systems depend on multiple layers of invisible work: the extraction of minerals such as cobalt and lithium, manufacturing under hazardous conditions, the maintenance of energy-intensive data centers, and the global workforce of annotators, moderators, and microworkers who sustain machine learning systems. Much of this labor is low-paid, repetitive, and psychologically taxing, yet remains obscured behind seamless interfaces. The discussion also acknowledged less visible forms of labor, including the work users perform through everyday interactions with digital platforms. Even educational institutions are affected, as technology companies increasingly displace or reshape roles traditionally held by teachers, shifting authority and responsibility within learning environments.

Environmental harm formed another crucial dimension of what lies "under the hood" of AI. Carbon emissions, water consumption, and the rapid growth of electronic waste were not treated as unfortunate side effects, but as structural features of contemporary AI infrastructures. The notion of green colonialism captured how environmental burdens are often displaced onto marginalized regions. Framing these issues together (epistemic omission, invisible labor, and ecological impact) made clear that AI operates as a global supply chain, entangled with extractive economies and uneven power relations.

Against this backdrop, ecofeminist AI was introduced not only as critique but as praxis. Rooted in anti-nuclear, Indigenous, and feminist movements of the 1970s, ecofeminism links environmental and social justice and insists on relationality, care, and plurality. Applied to AI, this perspective calls for reimagining technological development as ecological and justice-centered rather than extractive and profit-driven. It invites questions such as: Who builds? Who benefits? Who is erased? The discussion emphasized the need to move beyond individualized ethics toward structural justice, shifting the focus from compliance and bias metrics to transformations in governance, ownership, and accountability.

This shift also required epistemic work. Participants reflected on the importance of unlearning dominant narratives of progress and neutrality, critically engaging with existing technologies, and distinguishing between use cases that reinforce injustice and those that may enable resistance. Ideas such as subversion, hacking, optimism, and even "irritating" established systems emerged as strategies for intervening in power structures rather than accepting them as fixed. The notion of community-accountable AI surfaced as a challenging and forward-looking question: what would it mean for AI systems to be grounded in genuine collective consent and community agency?

The session concluded by connecting these principles to concrete action. Justice-oriented initiatives such as DAIR, the Data Workers' Inquiry, Data Against Feminicide, and Indigenous AI projects were discussed as examples of alternative approaches. Building on these inspirations, the group outlined plans for an LLM-based storytelling platform designed to surface erased narratives around invisible labor, environmental harm, and data gaps in marginalized domains. Rather than treating storytelling as an add-on, the project positions narrative as a way of making structural injustices legible and contestable.

The first workshop laid the intellectual and political groundwork for ColibrAI. By exposing the hidden infrastructures of AI and articulating ecofeminist AI as a relational, justice-centered alternative, the meeting reframed AI as a contested terrain.