Explainable Artificial Intelligence systems, including intelligent agents, are expected to explain to the humans (or other systems) they interact with the internal decisions, behaviors, and reasoning that produce their choices. In this context, the aim of this article is to introduce a practical reasoning agent framework that supports the generation of explanations about the goals the agent has committed to. Firstly, we present an argumentation-based formalization for supporting goal reasoning. It is based on the belief-based goal processing model proposed by Castelfranchi and Paglieri, which is more granular and refined than the Beliefs–Desires–Intentions model. We focus on the dynamics of goals from the moment they are mere desires until they become intentions, including the conditions under which a goal can be cancelled. We use formal argumentation reasoning to support the passage of goals from their initial state to their final state. Secondly, so that agents based on the proposed formalization can generate explanations about the goals they decided to commit to, we endow them with a mechanism for generating both complete and partial explanations. Finally, we illustrate the performance of our proposal with a rescue-robot scenario, for which a simulator was developed to support the agents' goal reasoning.