One of the most active research areas in computed tomography (CT) is the development of strategies that reduce radiation exposure while maintaining the high image quality required for accurate diagnosis. Recent advances in deep-learning-based, data-driven approaches to solving inverse problems in biomedical imaging have opened an alternative route to producing high-quality reconstructions from low-dose CT data. While most reconstruction approaches tackle the problem from a post-processing perspective, in this paper we propose an end-to-end solution that reconstructs full-dose tomographic images directly from low-dose measurements, inspired by the idea of unfolding a proximal gradient descent optimization algorithm into a finite number of iterations and replacing the proximal terms with trainable deep artificial neural networks. The framework is designed to encapsulate knowledge of the physical model of CT image formation and to produce high-quality images that account for human perception, through a Generative Adversarial Network with Wasserstein distance and a contextual loss. The proposed method was validated on a clinical dataset and achieved promising results compared with a state-of-the-art mean-squared-error (MSE) based learned iterative reconstruction approach, while maintaining a runtime suitable for a routine clinical setting.
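As a point of reference, a generic unrolled proximal gradient scheme of this kind can be sketched as follows; the notation below is introduced here for illustration and is not necessarily the exact parameterization used in this work:

\[
x_{k+1} = \Lambda_{\theta_k}\!\left( x_k - \alpha_k \, A^{*}\!\left( A x_k - y \right) \right), \qquad k = 0, \dots, K-1,
\]

where \(A\) denotes the CT forward operator, \(A^{*}\) its adjoint, \(y\) the low-dose measurements, \(\alpha_k\) a (possibly learned) step size, and \(\Lambda_{\theta_k}\) a trainable network standing in for the proximal operator at iteration \(k\). Training such a scheme end-to-end over the \(K\) unrolled iterations, with an adversarial and perceptual objective rather than MSE alone, is the general strategy the abstract describes.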