We present a model selection framework for Learning Rate-Free Reinforcement Learning that selects the learning rate on the fly during RL training. This adaptive learning rate tuning depends on neither the underlying RL algorithm nor the optimizer, relying solely on reward feedback to choose the learning rate; as a result, the framework can wrap any RL algorithm and produce a learning rate-free version of it.
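To make the wrapper idea concrete, below is a minimal sketch (not the paper's actual selection algorithm) of an algorithm-agnostic wrapper that treats each candidate learning rate as an arm of a bandit and picks the arm to train with next from reward feedback alone. The `AgentFactory` interface, the `train_one_phase` method, and the UCB selection rule are illustrative assumptions introduced here for exposition.

```python
import math
from typing import Callable, List

# Hypothetical base-agent interface: anything that can be constructed with a
# fixed learning rate and trained for one phase, returning the cumulative
# reward observed during that phase. The wrapper never looks inside the agent.
AgentFactory = Callable[[float], "object"]


class LearningRateFreeWrapper:
    """Sketch of model selection over candidate learning rates.

    Each candidate learning rate is an arm; the next arm to train with is
    chosen by a UCB rule over the average per-phase reward. Only reward
    feedback is used, so the wrapper is independent of the underlying RL
    algorithm and optimizer.
    """

    def __init__(self, agent_factory: AgentFactory, candidate_lrs: List[float]):
        self.candidate_lrs = candidate_lrs
        # One agent instance per candidate learning rate.
        self.agents = [agent_factory(lr) for lr in candidate_lrs]
        self.counts = [0] * len(candidate_lrs)        # training phases per arm
        self.mean_rewards = [0.0] * len(candidate_lrs)

    def _select_arm(self) -> int:
        # Play every arm once, then follow an upper-confidence-bound rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        total = sum(self.counts)
        ucb = [
            m + math.sqrt(2.0 * math.log(total) / n)
            for m, n in zip(self.mean_rewards, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def train(self, num_phases: int) -> None:
        for _ in range(num_phases):
            arm = self._select_arm()
            # `train_one_phase` is a hypothetical method: run the base RL
            # algorithm for a fixed budget and report the reward it collected.
            reward = self.agents[arm].train_one_phase()
            self.counts[arm] += 1
            n = self.counts[arm]
            # Incremental update of the arm's average per-phase reward.
            self.mean_rewards[arm] += (reward - self.mean_rewards[arm]) / n
```

Under these assumptions, the same wrapper applies unchanged whether the base learner is a policy-gradient or a value-based method, since only scalar reward feedback crosses the interface.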