Speed Perception Results are In


We are excited to reveal the first results from the SpeedPerception challenge! Over 5,000 sessions were completed, yielding more than 50,000 valid data points.

We tested three hypotheses, of which two were confirmed:

  • No single metric can explain human choices with 90%+ accuracy
  • Visual metrics will perform better than non-visual/network metrics
  • Users will not wait until “visual complete” to make a determination

For those unfamiliar with it, SpeedPerception is a free, open-source benchmark dataset capturing how people perceive above-the-fold rendering and webpage loading, which can be used to better understand the perceptual aspects of end-user web experience. The benchmark we’ve posted on GitHub provides a quantitative basis for comparing different algorithms. Our hope is that this data will spur computer scientists and web performance engineers to make progress in quantifying perceived web performance.

Thank you to everyone who participated. While we’ve posted the initial findings on GitHub, we will be releasing additional results. We welcome feedback on both the study and the results, as well as suggestions for next steps. If you want to analyze the data yourself and test your own hypotheses, the data and code are all available on GitHub. Please do share any results and conclusions with us.
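As a starting point for your own analysis, one natural question is how often a given metric agrees with the human majority in a side-by-side comparison. The sketch below is a minimal, hypothetical example of that calculation; the column names (`video_a_metric`, `video_b_metric`, `human_choice`) are assumptions for illustration, not the actual SpeedPerception schema.

```python
# Hypothetical sketch: score one timing metric against pairwise human choices.
# Assumes rows where each record holds the metric value for video A, the
# metric value for video B, and which side the human majority picked.

def metric_accuracy(rows):
    """Fraction of A/B pairs where the side with the lower (faster) metric
    value matches the side humans judged to be faster."""
    correct = total = 0
    for row in rows:
        a = float(row["video_a_metric"])
        b = float(row["video_b_metric"])
        predicted = "A" if a < b else "B"  # lower metric value = faster
        if predicted == row["human_choice"]:
            correct += 1
        total += 1
    return correct / total if total else 0.0

# Toy data, for illustration only:
rows = [
    {"video_a_metric": "1200", "video_b_metric": "2400", "human_choice": "A"},
    {"video_a_metric": "3000", "video_b_metric": "1500", "human_choice": "B"},
    {"video_a_metric": "900",  "video_b_metric": "800",  "human_choice": "A"},
]
print(metric_accuracy(rows))  # 2 of 3 pairs agree
```

Running the same function over several metrics on the real dataset would let you compare them directly, and to probe the first hypothesis, check whether any single metric crosses the 90% agreement mark.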

Thanks to Parvez Ahammad, Clark Gao, Prasenjit Dey, Pat Meenan and the entire web performance community for helping make this study a reality.