These are chat archives for FreeCodeCamp/DataScience

12th
Nov 2016
evaristoc
@evaristoc
Nov 12 2016 13:12


US Election Polls: what went wrong

You might be exhausted from hearing what happened and from the attempts to explain Trump's win. In my opinion, the real problem is not simply that the polls were wrong. I think it unfortunately cuts two ways.

On one side, it is a fact that the pollsters drew wrong conclusions from bad data while assuming it was correct. In any case, polling people's opinions is not easy: data might not lie, but people do. However, some pollsters were careful to say that Trump's chances were still high, even while giving Clinton the lead. For example, this NYTimes article gave Trump a chance of winning "the same as the probability that an N.F.L. kicker misses a 37-yard field goal". The tree chart in the article is really revealing of what happened in this election: Trump won by taking the key states the chart highlighted.

Not every pollster got it wrong: this article on USA Today reports that the Los Angeles Times/University of Southern California poll consistently estimated the correct outcome by making the right assumptions about the likely behaviour of Trump voters. And there were always some question marks, as an article I shared here about 2 months ago was pointing out.

The other problem is how much the untrained public really understands about statistics. We tend to reduce the complexity of uncertainty by making a probability look like a certainty. Hillary had the better chance of winning, but the uncertainty was still high; public opinion, however, took a chance for a fact. That could also have happened to Hillary's campaign team, which ended up ignoring doubtful states by assuming they were already strongholds.
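
As a minimal sketch of that point (my own illustration with a made-up 70% win probability, not a figure from any poll), a quick simulation in plain JS shows how far a strong chance is from a certainty:

```javascript
// Hypothetical: a candidate given a 70% modelled chance of winning.
const WIN_PROBABILITY = 0.7; // assumed model output, made-up number
const TRIALS = 100000;       // number of simulated elections

let losses = 0;
for (let i = 0; i < TRIALS; i++) {
  // Each trial "runs" the election once under the model.
  if (Math.random() >= WIN_PROBABILITY) losses++;
}

console.log(`Lost ${(100 * losses / TRIALS).toFixed(1)}% of ${TRIALS} simulated elections`);
// Prints roughly "Lost 30.0% of 100000 simulated elections" -
// the favourite still loses about 3 elections in 10.
```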

This and other polling errors are a reminder that:

  • Statistics is about chances, not facts - this is always hard to explain to an untrained audience, which is always after facts
  • Statistical models are efforts to model reality, not reality itself
  • Following from the two points above, be aware that even with wrong data and a wrong analysis you can land on the right conclusion by coincidence - this has happened throughout human history
  • Conclusions are only as good as the data and the analysis of that data: garbage in, garbage out
  • The analyst doesn't have all the information required to draw a conclusion - some information is managed secretly or kept hidden (for example, among senior executives)
  • Events are not static - unexpected situations can change their course when accompanied by the drive to "make it happen"; many wars have been won or lost that way. Luck also occurs in science, but there it has a fancier name: serendipity.

Big Data/ML etc. alone are not going to be the answer: what matters is what we can correctly - and, for the purposes of our discussions, ethically - extract from that information, and whether it really fits and generalises to the phenomena we want to explain.

evaristoc
@evaristoc
Nov 12 2016 14:32


canvas and d3.js

I have recently been working my way into canvas. REALLY exciting! My plan is to work with both canvas and d3.js in the future. Canvas requires even more knowledge of plain JS, plus a lot more maths - especially linear algebra for the large majority of existing examples.

Here is a somewhat dated link on how to work with both:
https://bocoup.com/weblog/d3js-and-canvas
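
As a rough sketch of the division of labour that link describes (under my own assumptions: d3 v4 loaded on the page and a hypothetical <canvas id="chart"> element), d3 does the data maths while plain canvas calls do the drawing:

```javascript
// Assumes d3 v4 is loaded and the page contains
// <canvas id="chart" width="400" height="300"> (both my assumptions).
const canvas = document.getElementById('chart');
const ctx = canvas.getContext('2d');

const data = [[1, 3], [2, 7], [3, 2], [4, 8], [5, 5]]; // made-up points

// d3 scales map data space to pixel space - no SVG elements involved.
const x = d3.scaleLinear().domain([0, 6]).range([0, canvas.width]);
const y = d3.scaleLinear().domain([0, 10]).range([canvas.height, 0]);

ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = 'steelblue';
for (const [dx, dy] of data) {
  ctx.beginPath();
  ctx.arc(x(dx), y(dy), 4, 0, 2 * Math.PI); // each datum drawn as a circle
  ctx.fill();
}
```

The appeal is that the browser only keeps pixels rather than thousands of DOM nodes, which is why canvas scales better than SVG for large datasets.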

Marc Lundgren
@marclundgren
Nov 12 2016 23:07
hi channel. I was directed here from the FreeCodeCamp/open-api project, under Getting an API key (http://bit.ly/2eOAHwr). Is this the right place to ask for either my own key, or even a shared one? I'm just curious whether I can use the open-api to see my progress