While the on-board sensing equipment of CAVs can reasonably characterize the surrounding traffic environment, its performance is limited by sensor range. By integrating short- and long-range information, a CAV can construct a comprehensive picture of its surrounding environment, thereby allowing it to plan both short- and long-term maneuvers. Coalescing local information with downstream information is critical for the CAV to make safe and effective driving decisions. While the literature is replete with CAV control approaches that use information sensed from the local traffic environment, studies that fuse information from various temporal-spatial instances to facilitate CAV movements are limited. In this paper, we propose a Deep Reinforcement Learning (DRL) based approach that fuses information obtained (via sensing and connectivity) on the local and downstream environments for CAV lane-changing decisions. We adopt learning-based techniques to provide an integrated solution that combines information fusion with the movement-decision process. We also determine the optimal connectivity range for each operating traffic density. We anticipate that deployment of the proposed algorithm in a CAV will facilitate reliable proactive driving decisions and ultimately enhance the overall operational efficiency of CAVs in terms of safety and mobility.