In this work we present an empirical approach to the grasp synthesis problem for anthropomorphic robots equipped with vacuum grippers. Our method exploits self-supervised, data-driven learning to estimate suitable grasps for both known and unknown objects. We employ a Convolutional Neural Network (CNN) that infers grasping points and approach angles directly from RGB-D images, casting grasp detection as a regression problem. In particular, we split the image into a grid of cells and, for each cell, the CNN provides a grasp estimate along with a confidence score. We collected a training dataset of 4000 grasping attempts by means of an automatic trial-and-error procedure, and we trained the CNN end-to-end on both grasping successes and failures. We report a set of preliminary experiments performed with known (i.e., objects included in the training dataset) and unknown objects, showing that our system effectively learns good grasping configurations.
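The following is a minimal sketch of the grid-based grasp regression idea described above, not the authors' actual architecture: the backbone layers, the grid size `grid_size`, and the per-cell output parameterization (grasp-point offsets, approach angle encoded as sine/cosine, confidence score) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GridGraspCNN(nn.Module):
    """Hypothetical grid-based grasp regressor over 4-channel RGB-D input."""

    def __init__(self, grid_size=7):
        super().__init__()
        self.grid_size = grid_size
        # Small convolutional backbone; the real network is unspecified here.
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid_size),  # collapse to an S x S feature grid
        )
        # Per-cell regression head: (dx, dy) grasp-point offset within the cell,
        # approach angle as (sin, cos), and a grasp-success confidence score.
        self.head = nn.Conv2d(128, 5, kernel_size=1)

    def forward(self, rgbd):  # rgbd: (B, 4, H, W)
        features = self.backbone(rgbd)        # (B, 128, S, S)
        out = self.head(features)             # (B, 5, S, S)
        offsets = torch.sigmoid(out[:, 0:2])  # grasp point, normalized per cell
        angle = torch.tanh(out[:, 2:4])       # assumed (sin, cos) angle encoding
        conf = torch.sigmoid(out[:, 4:5])     # per-cell confidence score
        return offsets, angle, conf

model = GridGraspCNN()
offsets, angle, conf = model(torch.randn(1, 4, 224, 224))
```

Training end-to-end on both successes and failures, as the abstract describes, would then amount to supervising `conf` with the recorded grasp outcomes and the regression outputs with the attempted grasp parameters.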