Abstract
Traditional finance and macroeconomic models usually assume that people either form rational expectations or converge to them along a learning path by minimizing prediction errors. The recently proposed Reference Model Based Learning (RMBL) model offers a different perspective: it hypothesizes that people minimize surprises rather than errors. In the spirit of Simon's "satisficing" criterion, RMBL predicts that people minimize errors only when the prediction error exceeds a threshold. We conduct meta-analyses of 18 Learning-to-Forecast Experiments (LtFEs; N=41,490). Our horse-race tests consistently show that student participants minimize surprises rather than errors in the LtFEs. In contrast, results based on data from the Survey of Professional Forecasters (SPF) show no evidence that professional forecasters minimize surprises. Together, these results suggest that minimizing surprises via RMBL may be the simple procedure people employ when navigating complexity in forecasting.