Call for Papers: Volume IV Issue 3: Automata

2025-04-09

Guest Editor: Marco Vanucci, Opensystems Architecture

 

αὐτόματος

From αὐτο- (auto-, “self”) + Proto-Indo-European *mn̥tós, from *men- (“to think”) (whence μένος (ménos) and others).


The progressive encapsulation of human knowledge into technical systems—ranging from mechanical tools to AI-driven algorithms—has accelerated automation, expanding “the number of operations which we can perform without thinking” (Alfred N. Whitehead). While this has increased efficiency, scalability, and complexity in decision-making processes, it is also altering the role of human agency. Algorithmic processes have liberated architects and artists from the burden of idiosyncratic expression, rendering authorship multiple, decentralised, and participatory. Yet, as technical systems evolve toward self-learning and autonomous determination, the question arises: to what extent is human agency being displaced?

Autonomy, in the Kantian tradition, refers to independent deliberation and self-governance—the capacity to establish one’s own guiding principles. Niklas Luhmann’s theory of social systems describes how such systems evolve through autopoiesis: they develop within their own boundaries while responding to external complexity. These models contrast sharply with technical systems, whose actions remain automatic—externally programmed and dependent on fixed rules and inputs. Automation, in other words, is the execution of predefined operations according to external rules. Thus, while automation optimises tasks, it remains fundamentally controlled by human intent—or at least, it has until now.

 

With the rise of machine learning and predictive algorithms, technical systems are shifting toward self-regulation. As automation encroaches upon cognitive processes, concerns emerge over control, authorship, and accountability. Are these technological developments merely a refinement of automation, or do they signal a categorical shift toward a new kind of autonomy? When predictive algorithms increasingly constrain human decisions, what space is left for judgment, imagination, or resistance? Are these systems merely sophisticated tools, or are they something altogether different?

In 1960, Bruno Zevi’s sceptical review of Luigi Moretti’s parametric investigations—“Electronic brains? No, calculating machine”—foreshadowed the present dilemma. Contemporary “electronic brains,” however, operate through probabilistic reasoning, non-linear logic, and internal opacity. Michael Polanyi’s idea of tacit knowledge—“knowing more than we can tell”—has been quantified, transferred, and reproduced in technical systems, fuelling the rise of the black box: systems so complex that their inner workings remain opaque and unintelligible. Here is the paradox: in the age of smart machines, “we know less than we can tell.” Unlike deterministic algorithms, these systems operate through probabilistic non-linearity, making it increasingly difficult to distinguish between automation and autonomy.

 

In architecture, automation is usually equated with optimisation and control. In contrast, visual artists have long embraced the unpredictable potential of smart technical systems. Surrealist automatism, for instance, aimed to bypass rational constraints and tap into unconscious creativity. There, technical systems become instruments of deviation, surprise, and poetic rupture. They deceive (Italian illudere, from the Latin ludere, “to play”), allowing us to act “as if,” to simulate, and to construct meaning beyond rational bounds.

In games, in code, in architecture—creativity often emerges within the very constraints that seem to limit it. Just as the Surrealists surrendered to the logic of the dream, architects and artists today might begin to instigate and exploit the misbehaviour of intelligent technical systems. Rather than seeking rational order or fine-tuning control, they could start to engage with the serendipitous nature of machines that dream, misfire, hallucinate. The challenge would be to reframe the automaton not as a servant of efficiency, but as an uncanny, autonomous enabler of latent significations. This issue of Khorein will investigate how automation—once a symbol of mechanistic repetition—might be reimagined as a site of creative and speculative autonomy. What new forms of agency emerge when machines no longer serve, but perform, subvert, or dream? No longer tools of control, but enabling partners: unknowable, unruly, and strangely generative.

 

Topics:
#architecture #automation #autonomy #artificialintelligence #cybernetics

 

Submissions should be emailed to khorein@ifdt.bg.ac.rs.

Submission deadline: October 1, 2025