Daily Links 04/08/2014

Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.

Apache Mahout, Hadoop’s original machine learning project, is moving on from MapReduce — Tech News and Analysis

While data processing in Hadoop has traditionally been done using MapReduce, the batch-oriented framework has fallen out of vogue as users began demanding lower-latency processing for certain types of workloads — such as machine learning. However, nobody really wants to abandon Hadoop entirely because it’s still great for storing lots of data and many still use MapReduce for most of their workloads. Spark, which was developed at the University of California, Berkeley, has stepped in to fill that void in a growing number of cases where speed and ease of programming really matter.

Some diseases and conditions have a long lead time, developing so slowly that they may never actively threaten our health. When we’re diagnosed early with those, we’ve “had” the disease for longer, but we don’t live longer. All we’ve done is increase our “disease survival” by shortening the “disease-free” part of our lives.
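That lead-time effect is easy to see with numbers. A minimal sketch, using entirely hypothetical ages (assume death at 70 in both scenarios regardless of when diagnosis happens):

```python
# Lead-time bias sketch with made-up numbers: earlier diagnosis
# lengthens measured "disease survival" without extending life.

AGE_AT_DEATH = 70  # hypothetical: the outcome is the same in both scenarios


def disease_survival(age_at_diagnosis: int, age_at_death: int = AGE_AT_DEATH) -> int:
    """Years lived after diagnosis -- the 'survival' a screening study measures."""
    return age_at_death - age_at_diagnosis


late = disease_survival(67)   # diagnosed from symptoms at 67 -> 3 years survival
early = disease_survival(60)  # caught by screening at 60 -> 10 years survival

# Measured survival more than triples, yet the lifespan is identical;
# the extra "survival" came entirely out of the disease-free years.
print(f"late diagnosis: {late} yr survival; early diagnosis: {early} yr; "
      f"death at {AGE_AT_DEATH} either way")
```

The point the excerpt makes falls out directly: the ten-year survivor and the three-year survivor died at the same age; screening only moved the starting line of the clock.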