Since human moral decisions often carry an emotional charge, whether of empathy or antipathy, it is necessary to address the possibility of developing emotionally motivated machines. This is often cited as one of the limits of AI by those who doubt its progress: “Machines will never be emotionally motivated, for they have no endocrine system to endow them with emotions.” This thesis systematically ignores both the role that emotions play in humans and what we should really expect from cognitive machines. First, we will highlight that, from a functional viewpoint, emotions play an anticipatory role, preparing an organism's possible responses. Then we will caution that emotions are not a human peculiarity, since many other species have them too. Hence, there is nothing to prevent a machine from conjecturing alternative possible responses in advance, using its capacity to conceive counterfactuals. In the future we will not have fearful or sad computers, but their decision-making processes will incorporate the functional role that fear or sadness play in ours. Indeed, building on what is already achievable today, we will soon have robots capable of interpreting and interacting with human emotions.