Computing and learning generalized patterns of behavior that address multiple related problems is a longstanding goal in AI planning. These directions of research bridge the gap between generalized planning, lifted sequential decision making (Boutilier, Reiter, & Price, 2001; Sanner & Boutilier, 2005, 2009; Cui, Keller, & Khardon, 2019), and approaches for learning generalized control knowledge and heuristics for solving multiple planning problems (Khardon, 1999; Winner & Veloso, 2003; Yoon, Fern, & Givan, 2008; Shen, Trevizan, & Thiébaux, 2020; Toyer, Thiébaux, Trevizan, & Xie, 2020; Rivlin, Hazan, & Karpas, 2020; Garg, Bajpai, & Mausam, 2020; Ferber, Geisser, Trevizan, Helmert, & Hoffmann, 2022; Ståhlberg, Bonet, & Geffner, 2022). Existing results on theoretical aspects of generalized planning, especially on determining reachability and termination properties of generalized plans on arbitrary problem instances, have been restricted to specific types of graph structures (Srivastava, Immerman, & Zilberstein, 2012), limited dimensions of variation among problem instances (Hu & Levesque, 2010), or limited expressiveness in the semantics of actions (Srivastava, Zilberstein, Immerman, & Geffner, 2011). Intuitively, the framework developed in this paper goes beyond existing efforts by developing a new process for the analysis of generalized plans expressed as finite-state machines without the restrictions noted above: it permits arbitrary structures, as well as actions that can increment or decrement variables by specific amounts within non-deterministic or deterministic control structures. Such methods have been shown to be useful in the form of sketches for planning (Bonet & Geffner, 2018; Francès, Bonet, & Geffner, 2021; Bonet & Geffner, 2021; Drexler, Seipp, & Geffner, 2022), generalized heuristics (Karia & Srivastava, 2021), and generalized Q-functions for reinforcement learning in stochastic settings (Karia & Srivastava, 2022).