The VAST Challenge has many unique features (like synthetic data sets with injected ground truth), and this year, for the first time, it features a predictive analytics and design mini-challenge (Stephen Few has talked about it here too) that you should definitely check out.
We talk with Prof. Georges Grinstein from UMass Lowell and Celeste Paul from NSA. They give us lots of details about how the data is generated, how the entries are evaluated, and what it's like to participate in the contest.
You guys should actually give it a try and rock it!
[Sorry no episode chapters this time]
- KDD Cup: main contest in data mining
- TREC: main contest in text retrieval
- Benchmark data sets from InfoVis Contest
- Visual Analytics Benchmark Repository (all past VAST Challenge editions)
- Sumedicina: telling fictional stories with charts (see explanation here)
- Catherine Plaisant, Jean-Daniel Fekete, and Georges Grinstein. Promoting insight-based evaluation of visualizations: From contest to benchmark repository. IEEE Transactions on Visualization and Computer Graphics 14.1 (2008): 120-134.
- Pascale Proulx and Casey Canfield. The beneficial role of the VAST Challenges in the evolution of GeoTime and nSpace2. Information Visualization. May 10, 2013 preprint
- Christian Rohrdantz, Florian Mansmann, Chris North, and Daniel A Keim. Augmenting the educational curriculum with the Visual Analytics Science and Technology Challenge: Opportunities and pitfalls. Information Visualization. April 11, 2013 preprint
- Jean Scholtz, Catherine Plaisant, Mark Whiting, and Georges Grinstein. Evaluation of visual analytics environments: The road to the Visual Analytics Science and Technology challenge evaluation methodology. Information Visualization. June 11, 2013 preprint
- Loura Costello et al. Advancing user-centered evaluation of visual analytic environments through contests. Information Visualization 8.3 (2009): 230-238.