Managing fleet disruption is essential for an airline to control delay costs. Delays arising from these disruptions can be mitigated through fleet operations such as aircraft swapping. This paper applies machine learning techniques to the disruption problem. While airlines may handle this process manually or with simple predefined rules, the complexity of the problem makes it well suited to a computational approach. The paper describes the principles of reinforcement learning and the model used to test them. Two representations of the decision state are considered and applied to a set of historical schedules from an airline. The performance obtained by swapping aircraft with reinforcement learning is then compared to the idle option, i.e., swapping no flights. The comparison shows that while the algorithm is far from optimal, the agent makes relevant decisions, as it outperforms the idle behaviour in heavily disrupted simulations.