Recent technological advances in surgery have made it possible to perform cellular and molecular imaging intra-operatively. Although optical biopsy techniques such as probe-based confocal laser endomicroscopy (pCLE) enable real-time diagnosis and tissue characterisation in vivo, the flexibility of the probe introduces significant challenges under manual control. Examination of large tissue areas is particularly difficult because of the micron-scale resolution of the probe and the need to maintain a consistent probe orientation and contact force with the tissue to avoid cellular deformation or damage. Using a robotic manipulator to perform the surface scan automatically offers clear benefits in terms of positioning repeatability and accuracy. However, pre-programming such a complex task is not realistic because of patient-specific anatomy and constant changes in tissue morphology during the operation. To overcome this problem, a cooperative, in situ microscopic scanning and simultaneous tissue surface reconstruction technique is proposed. The system provides a hands-on, learning-based framework that derives optimal trajectory coverage from surgeon-demonstrated motions intra-operatively. The position and force information acquired during scanning is also used to reconstruct the surface morphology simultaneously and, combined with the pCLE images, to generate a 3D functional map of the tissue.
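To make the final step concrete, the sketch below illustrates one simple way the logged probe tip positions could be turned into a surface mesh and linked to the pCLE frames captured at each pose. It is not the paper's reconstruction method: it assumes synthetic tip positions, treats the scanned patch as a height field, and uses a plain Delaunay triangulation; the variable names and data layout are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic stand-in for the probe tip positions logged during a hands-on scan:
# an (N x 3) array of x, y, z coordinates in millimetres. In practice these would
# come from the robot kinematics together with the measured contact force.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
z = 0.2 * np.sin(xy[:, 0]) + 0.1 * np.cos(xy[:, 1])   # gently curved tissue patch
tip_positions = np.column_stack([xy, z])
frame_indices = np.arange(len(tip_positions))          # pCLE frame captured at each pose

# Triangulate the samples in the tissue plane; the z values turn the 2D mesh into
# a height-field reconstruction of the scanned surface patch.
tri = Delaunay(tip_positions[:, :2])

# A "functional map" in this toy form is simply the reconstructed mesh plus a
# per-vertex link to the microscopy frame acquired at that location.
functional_map = {
    "vertices": tip_positions,
    "faces": tri.simplices,
    "pcle_frame": frame_indices,
}
print(f"{len(functional_map['vertices'])} vertices, {len(functional_map['faces'])} triangles")
```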