As robots become commonplace, people will need to trust them for successful human-robot interaction to occur. Two experiments were conducted using the "minimal group paradigm" to explore whether the intergroup processes described by social identity theory influence trust formation and impressions of a robot. In Experiment 1, participants were allocated to either a "robot" or a "computer" group and then played a cooperative visual tracking game with an Aldebaran Nao humanoid robot as a partner. We hypothesised that participants in the "robot group" would demonstrate intergroup bias by sitting closer to the robot (proxemics) and trusting the robot's suggested answers more frequently than their "computer group" counterparts. Experiment 2 used an almost identical procedure with a different set of participants; however, all participants were assigned to the "robot group", and three levels of anthropomorphic robot movement were manipulated. Our results suggest that intergroup bias and humanlike movement can significantly affect human-robot approach behaviour. Significant effects were found for trusting the robot's suggested answers with respect to task difficulty, but not for group membership or robot movement.