Non-intrusive face authentication and biometrics are becoming a commodity with a wide range of applications. This success also increases their exposure to attacks, which must be addressed with more sophisticated countermeasures. In this paper we propose to strengthen face liveness detection models based on remote photoplethysmography (rPPG) pulse estimates by learning to generate high-quality, yet fake, pulse signals using Deep Convolutional Generative Adversarial Networks (DCGANs). The simulated liveness signals are then used to improve detectors by providing them with better coverage of potential attack-originated signals during the training stage. Thus, our DCGAN is trained to simulate real pulse signals, enabling sophisticated attacks based on high-quality fake pulses. The full liveness detection framework then leverages these signals to assess the genuineness of pulse signals robustly at test time. Experiments confirm that this strategy yields significant robustness improvements, with relative AUC gains above 3.6%. We observed a consistent performance improvement not only against GAN-based attacks, but also against more traditional ones (e.g., video face replay). Both code and data will be made publicly available to foster research on the topic.
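As a minimal sketch of the training-set augmentation idea described above: genuine rPPG pulses are labeled live, while generator-produced fakes are labeled as attacks and mixed into the detector's training data. All names, signal models, and shapes here are illustrative assumptions; in the paper the fake pulses come from a trained DCGAN, which is replaced below by a simple noise-based placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def genuine_pulse(n_samples=256, fs=30.0):
    """Toy rPPG-like pulse: a noisy sinusoid at a plausible heart rate."""
    t = np.arange(n_samples) / fs
    hr_hz = rng.uniform(0.8, 2.0)  # roughly 48-120 bpm
    return np.sin(2 * np.pi * hr_hz * t) + 0.1 * rng.standard_normal(n_samples)

def fake_pulse(n_samples=256):
    """Placeholder for DCGAN-generated fake pulses (here: smoothed noise)."""
    x = rng.standard_normal(n_samples)
    kernel = np.ones(5) / 5.0
    return np.convolve(x, kernel, mode="same")

# Build a detector training set: genuine pulses (label 1) plus
# simulated attack pulses (label 0), mirroring the augmentation strategy.
X = np.stack([genuine_pulse() for _ in range(100)]
             + [fake_pulse() for _ in range(100)])
y = np.concatenate([np.ones(100), np.zeros(100)])

print(X.shape, y.shape)  # (200, 256) (200,)
```

Any binary classifier trained on `(X, y)` then plays the role of the liveness detector; the point of the augmentation is that it has seen high-quality fake pulses before encountering them at test time.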