An intelligent agent will, inherently, be an autonomous agent. If this thesis holds, it becomes necessary to clarify the notion of autonomy and its prerequisites. From the outset, the difficulties inherent in developing the ways of thinking that make autonomy effective must be acknowledged. In fact, most individuals deliberate and decide on concrete aspects of their lives yet are unable to do so critically enough. Autonomy requires a complex set of prerequisites to be met, which we make explicit. Among them is the ability to construct hypothetical counterfactual scenarios, which support the analysis of possible futures, thereby leading the subject to the construction of a non-standard identity, of their own preference and choice. In the realm of AI, the notions of genetic algorithms and emergence allow for an engineered approximation of what is viewed as autonomy in humans. Indeed, a machine can follow trial-and-error procedures, finding unexpected solutions to problems by itself or in conjunction with other machines. Though we are mindful of the difficulties inherent in the construction of autonomy, nothing in principle prevents machines from attaining it.
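As a minimal illustrative sketch of the trial-and-error procedure mentioned above, the following toy genetic algorithm evolves bitstrings toward a simple fitness target (counting ones, the standard "OneMax" exercise). All names and parameter values here are our own assumptions for illustration; the point is only that selection plus random variation finds a solution the programmer never specified directly.

```python
import random

def one_max(bits):
    # Fitness: number of 1s in the bitstring (the toy "problem" being solved).
    return sum(bits)

def evolve(length=20, pop_size=30, generations=100, mutation_rate=0.05, seed=0):
    # Trial-and-error search: random variation plus selection, with no
    # explicit instructions on how to construct a good solution.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        survivors = pop[: pop_size // 2]          # selection of the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # pick two parents
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            # Occasional random mutation flips individual bits.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))  # high fitness emerges without an explicit construction rule
```

Nothing in the code states how to build a high-fitness string; the solution emerges from repeated variation and selection, which is the engineered approximation of autonomous problem-solving discussed in the text.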