Complex decision-making problems require efficient optimization techniques to balance competing objectives and constraints. Among these problems, combinatorial optimization problems (COPs) are of particular interest because of their large, discrete, and often highly structured solution spaces. These problems are frequently constrained by computational limitations, which makes the identification of high-quality solutions particularly challenging. Search-based methods are therefore commonly employed to address COPs, but their success depends heavily on the design of effective optimization strategies. Although these methods can be highly effective, they often struggle to maintain performance when the complexity of the problem increases or the problem landscape evolves. Static algorithm configurations, which are predetermined and fixed throughout the optimization process, are particularly vulnerable in such scenarios. They lack the flexibility needed to adapt to new challenges, potentially failing to explore promising regions of the solution space or becoming trapped in suboptimal areas.

In response to these limitations, there has been growing interest in learning-based methods for the dynamic control of algorithm parameter configurations and operator selection in real time. These methods treat the control of optimization algorithms as a sequential decision-making problem, drawing on concepts from machine learning, particularly reinforcement learning. Unlike traditional approaches that rely on static configurations, dynamic control methods can adjust search strategies or algorithm parameters as optimization progresses. This adaptability allows the algorithm to respond to the evolving problem landscape as the search unfolds.
By incorporating real-time feedback into the decision-making process, such methods significantly enhance the efficiency and effectiveness of the search, allowing algorithms to better navigate complex and large solution spaces, escape local optima, and respond flexibly to changes in the problem environment.

This dissertation explores methods for the dynamic control of search-based algorithms, specifically in the context of COPs. We propose frameworks that incorporate real-time feedback from the optimization process to guide the selection of operators and the adjustment of algorithm parameters. In our work, we emphasize the development of algorithmic control strategies designed to perform effectively across diverse problem types, instances, sizes, objectives, and domains, minimizing the need for problem- or domain-specific configuration.
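To make the sequential decision-making framing concrete, the sketch below (not the dissertation's actual method; all names and the toy problem are illustrative assumptions) controls operator selection in a simple hill climber with an epsilon-greedy bandit: at each step the controller picks an operator, observes the change in objective value as reward, and updates its estimate of that operator's usefulness.

```python
import random

def one_max(x):
    """Toy COP objective: number of 1-bits (to be maximized)."""
    return sum(x)

def flip_one(x, rng):
    """Local-search operator: flip a single random bit."""
    y = x[:]
    y[rng.randrange(len(x))] ^= 1
    return y

def flip_several(x, rng):
    """More disruptive operator: flip about 10% of the bits."""
    y = x[:]
    for i in rng.sample(range(len(x)), max(1, len(x) // 10)):
        y[i] ^= 1
    return y

def adaptive_search(n=30, iters=5000, eps=0.3, seed=0):
    """Hill climber whose operator choice is an epsilon-greedy bandit."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    ops = [flip_one, flip_several]
    value = [0.0] * len(ops)  # running reward estimate per operator
    count = [0] * len(ops)
    for _ in range(iters):
        # Decision: explore a random operator with probability eps,
        # otherwise exploit the operator with the best estimate so far.
        if rng.random() < eps:
            a = rng.randrange(len(ops))
        else:
            a = max(range(len(ops)), key=value.__getitem__)
        candidate = ops[a](x, rng)
        reward = one_max(candidate) - one_max(x)
        if reward >= 0:  # accept non-worsening moves
            x = candidate
        # Feedback: incremental-mean update of the operator's value,
        # crediting only realized improvement.
        count[a] += 1
        value[a] += (max(reward, 0) - value[a]) / count[a]
    return one_max(x), value
```

Replacing the bandit with a reinforcement-learning agent that conditions its choices on features of the search state is the step from this sketch toward the kind of dynamic control frameworks discussed above.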