There are some fascinating ideas and discussions brought up here. As an example: “… it’s not enough for judges to predict how much privacy people actually expect in the face of new technologies; instead, they need to identify how much privacy we should demand in order to live in a free society rather than a police state …”
This article draws an excellent distinction between “information” and “intelligence”, and describes the fascinating work of a company pursuing something Philip K. Dick wonderfully described in his story “The Minority Report”. The idea that people could foresee outcomes in alternate futures is being translated into the computational realm through highly evolved, and complicated, combinations of algorithms.
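One way to picture those “combinations of algorithms” is as a simple ensemble: several independent models each score the likelihood of some future event, and their outputs are blended into a single forecast. This is only a toy sketch of that idea; the signal names, models, and weights below are invented for illustration and have nothing to do with any actual product.

```python
# Toy sketch of "combined algorithms": a weighted ensemble of predictors.
# Each predictor maps a dict of evidence signals to a probability that
# some future event occurs; the ensemble blends them by fixed weights.
# All signal names and weights are hypothetical.

def trend_model(signals):
    # Naive model: recent activity level, clamped to [0, 1], used
    # directly as a probability.
    return min(1.0, max(0.0, signals.get("recent_activity", 0.0)))

def base_rate_model(signals):
    # Ignores current evidence entirely; always returns the
    # historical base rate of the event.
    return signals.get("base_rate", 0.05)

def combined_forecast(signals, models, weights):
    # Weighted average of the individual model outputs.
    total = sum(weights)
    return sum(w * m(signals) for m, w in zip(models, weights)) / total

signals = {"recent_activity": 0.8, "base_rate": 0.1}
p = combined_forecast(signals, [trend_model, base_rate_model], [2.0, 1.0])
print(round(p, 3))  # (2*0.8 + 1*0.1) / 3 ≈ 0.567
```

The point of the sketch is that no single component “predicts the future”; the apparent precognition emerges from combining many weak signals, which is also why such systems work only “within generous margins”.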
As you can imagine, there is much interest in this particular area of search technology, as the implications, and impact, of a successful implementation (even within generous margins) are enormous. Precognition of future events by humans has long been relegated to the prophetic, in the mystical sense, and the parapsychological, in modern cognitive theory. But with the majority of contemporary culture placing increasing emphasis on, and reliance and trust in, computational outcomes, perhaps this branch of investigation will lead to an outcome of dei ex machina as the resulting algorithmic software is sold and distributed across the globe.
"So what?", you might ask, "What’s so wrong with being able to, sort of, predict future outcomes?". Initially, my personal reaction was "wow, that’s cool", but then I started to think about the propensity of our species to meddle with variables to create favourable outcomes. Think about how an intelligence agency might want to use this predictive technology to identify, and then favourably influence, outcomes. Think about how governments, public and private institutions, grey markets, and shadow networks could try to influence outcomes in their respective spheres of influence. Think about the ethical and moral questions that arise from all of the above.
Now think about this scenario: I have the computational precognitive ability, and you don’t. Is the future that’s best for me a future that’s best for you?