Investigating Human Priors for Playing Video Games
Rachit Dubey
Pulkit Agrawal
Deepak Pathak
Tom Griffiths
Alexei A. Efros
University of California, Berkeley
ICML 2018
[Download Paper]
[Github Code]


Human gameplay on the game version without any object priors
Human gameplay on the original game version

What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors in solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors. We do this by modifying the video game environment to systematically mask different types of visual information that humans could use as priors. We find that removing some forms of prior knowledge drastically slows human players, increasing the time needed to solve the game from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient gameplay.
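The paper's actual game variants were re-rendered by hand, but the core masking idea can be sketched in a few lines. In this hypothetical illustration (the function name and array layout are our own assumptions, not the authors' code), every known sprite region in a frame is replaced with a uniform-colored block, removing semantic cues while preserving object locations:

```python
import numpy as np

def mask_semantics(frame, sprite_masks, fill_color=(128, 128, 128)):
    """Replace each known sprite region with a uniform block.

    frame:        HxWx3 uint8 RGB image.
    sprite_masks: list of HxW boolean arrays, one per object.
    Returns a new frame; the input is left untouched.
    """
    out = frame.copy()
    for mask in sprite_masks:
        out[mask] = fill_color  # semantic texture gone, location kept
    return out

# Toy 4x4 frame with one 2x2 red "sprite".
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = (255, 0, 0)
obj_mask = np.zeros((4, 4), dtype=bool)
obj_mask[1:3, 1:3] = True

masked = mask_semantics(frame, [obj_mask])
```

After the transform, the sprite is still a distinct blob, so a player can tell an object is there, but can no longer tell what it is, which is exactly the distinction the semantics-masking ablation probes.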

Game links

Click on the images to play the different games.

Original | Semantics Masked | Semantics Reverse | No object | No affordance | No similarity | Ladder | Gravity

Hard Mode Alert!

Expository Articles and Videos

Arxiv Insights made a video on this paper as part of their ML series.


R. Dubey, P. Agrawal, D. Pathak, T. L. Griffiths, A. A. Efros.
Investigating Human Priors for Playing Video Games.
In ICML, 2018. (hosted on arXiv)


In the Media

Discussions on the web

Hacker News
Reddit Discussion

Selected media articles

MIT Technology Review
Import AI
Hitech News Daily


We thank Jordan Suchow, Nitin Garg, Michael Chang, Shubham Tulsiani, Alison Gopnik, and other members of the BAIR community for helpful discussions and comments. This work has been supported, in part, by Google, ONR MURI N00014-14-1-0671, Berkeley DeepDrive, NVIDIA Graduate Fellowship to DP, and the Valrhona Reinforcement Learning Fellowship.