Our ability to play games like chess and Go relies both on planning several moves ahead and on recognition, or gist: intuitively assessing the quality of possible game states without explicit planning. In this paper, we investigate the role of recognition in puzzle solving. We introduce a simple puzzle game for studying planning and recognition in a non-adversarial context, along with a reinforcement learning agent that solves these puzzles relying purely on recognition, using a neural network to capture intuitions about which game states are promising. We find that our model effectively predicts the relative difficulty of the puzzles for humans and shows qualitative patterns of success and initial moves similar to those of humans. Our task and model provide a basis for studying planning and intuitive notions of fit in puzzle solving, and are simple enough for use in developmental studies.