
Archive for the ‘science prizes’ Category

In a previous article, CNN and Evil – Snivil…, I suggested that it is incumbent on the reader, listener, watcher, or any otherwise engaged person to be able to tell when something in the media doesn’t seem right or justifiable. I promised in that piece that there are some rules of thumb for detecting faulty, deceptive, or malicious content. The set selected here is the Sagan Baloney Detection Kit. There are dozens of such lists out there on the web, in science methodology texts, and even in some writing books. Like any set of rules of thumb, they are not absolute, but they provide an approximation that will save time and angst when sifting through the escalating volume of content we have access to.

Like every source of ‘help,’ use what works for you and toss the rest. Know that those who want your eyeballs understand this list better than most of us and will do what they can to keep you from recognizing these red flags in their materials.

Let us know what we missed and what we need to cull from our list.

Baloney Detection Kit: based on selections taken from, and similar to, those in Carl Sagan’s book “The Demon-Haunted World,” Ballantine Books (February 25, 1997), ISBN-10: 0345409469.

These collectively or individually are ‘red flags’ that suggest deception. The following are tools for detecting fallacious or fraudulent arguments wherever they present themselves.

  1. Wherever possible, there must be independent confirmation of the facts.
  2. Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  3. Arguments from authority carry little weight (in science there are no “authorities”).
  4. Try not to get overly attached to a hypothesis just because it’s yours, your parents’, etc.
  5. Quantify, wherever possible.
  6. If there is a chain of argument every link in the chain must work.
  7. “Occam’s razor” – if two hypotheses explain the data equally well, choose the simpler.
  8. Ask whether the hypothesis can, at least in principle, be falsified (shown to be false by some unambiguous test). In other words, is it testable? Can others duplicate the experiment and get the same result?
  9. Conduct control experiments – especially “double blind” experiments, where the person taking the measurements does not know which are the test subjects and which are the controls.
  10. Check for confounding factors – separate out the variables impacting the conclusions (see the sketch below).
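
For item 10, here is a minimal sketch (Python, with entirely made-up simulated numbers) of how a confounding factor can manufacture an apparent effect: a hypothetical supplement looks harmful in the naive comparison only because older people take it more often, and the gap vanishes once you compare within age bands.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Made-up confounder: age drives both "took the supplement" and "recovered".
age = rng.uniform(20, 80, n)
took_supplement = rng.random(n) < (age - 20) / 60 * 0.8   # older folks take it more often
recovered = rng.random(n) < np.where(age < 50, 0.8, 0.4)  # younger folks recover more often

# Naive comparison: the supplement looks harmful...
print("recovery with supplement:   ", round(recovered[took_supplement].mean(), 2))
print("recovery without supplement:", round(recovered[~took_supplement].mean(), 2))

# ...but within each age band (controlling for the confounder) the gap disappears.
for label, band in [("under 50", age < 50), ("50 and over", age >= 50)]:
    with_s = recovered[band & took_supplement].mean()
    without_s = recovered[band & ~took_supplement].mean()
    print(f"{label}: with={with_s:.2f} without={without_s:.2f}")
```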

Common fallacies of logic and rhetoric

  1. Ad hominem – attacking the arguer rather than the argument.
  2. Argument from “authority”
  3. Monocausality – reducing a complex outcome to a single cause-and-effect statement.
  4. Argument from adverse consequences (focus on the dire consequences of an “unfavorable” decision; attack a sovereignty or you’ll be fighting them on the streets of New York).
  5. Appeal to ignorance (absence of evidence is not evidence of absence).
  6. Special pleading (typically referring to god’s will, Buddha’s mysteries or passions of Islam).
  7. Begging the question (assuming an answer in the way the question is phrased).
  8. Observational selection (counting the hits and forgetting the misses as in fortune telling).
  9. Statistics of small numbers (such as drawing conclusions from inadequate sample sizes; see the sketch after this list).
  10. Misunderstanding the nature of statistics (President Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence!)
  11. Inconsistency (e.g. military expenditures based on worst case scenarios but scientific projections on environmental dangers ignored because they are not “substantiated”).
  12. Non sequitur – “it does not follow” – the logic falls down.
  13. Post hoc, ergo propter hoc – “it happened after so it was caused by” – confusion of cause and effect.
  14. Meaningless question (“what happens when an irresistible force meets an immovable object?”).
  15. Excluded middle – considering only the two extremes in a range of possibilities (making the “other side” look worse than it really is).
  16. Confusion of correlation and causation.
  17. Straw man – caricaturing (stereotyping, marginalizing) a position to make it easier to attack.
  18. Suppressed evidence or half-truths.
  19. Weasel words – for example, use of euphemisms for war such as “police action” to get around limitations on Presidential powers. “An important art of politicians is to find new names for institutions which under old names have become odious to the public.”
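
And for the statistics-of-small-numbers entry, a quick simulation – again Python with made-up numbers – showing how often a perfectly fair coin looks “strongly biased” when the sample is small:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fair coin: the true heads rate is exactly 0.5.
for sample_size in (10, 100, 1000):
    groups = rng.random((10_000, sample_size)) < 0.5   # 10,000 groups of flips
    rates = groups.mean(axis=1)
    looks_biased = np.mean((rates <= 0.3) | (rates >= 0.7))
    print(f"n={sample_size:4d}: {looks_biased:.1%} of groups look 'strongly biased'")
```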

Read Full Post »

Tired of politics? football? hurricanes?

We’re on the eve of the LHC’s first proton beam.

Be sure to catch the live webcast here. Keep in mind that all times are CEST (pretty late for us on the west coast!).

More information here, and of course on Wikipedia, Google, YouTube, and in the journals.

Read Full Post »

Mathematica is rad.

Machine learning is also rad.

Check out these fine demos and code files for some nice informatics and machine learning ideas.

Read Full Post »

N-brain is back at it with coding challenges.  This time it’s public and not secretive at all.  Over Christmas last year, N-brain launched a stealth coding competition to lure job applicants and introduce the dev world to their new software development platform, UNA.  I participated in and analyzed that coding competition almost simultaneously.

Since then I’ve chatted with the folks behind n-brain.  They don’t mess around.  These guys know their code and their dev process.  This new competition is even tougher than the first and the prizes are much bigger.

UNA is a special platform.  Anyone who knows how I code and run projects understands how bold a statement that is for me.  Why?  I very much believe in solo hacking until it works.  UNA is about the group – real-time collab.  I usually hate group collaboration on code and design because the communication and miscommunication get in the way.  UNA is different because the collaboration is weirdly seamless and actually real time – you all see the same things, you chat inline, code completion just works, everything is tracked, and never once does the group feature take precedence over just coding.

Anyway… the coding competition is something to pay attention to.  I’ll be analyzing this one as well as I try to uncover more about the how, what, and why of developer and creative behavior.  For some insight into how big and serious these types of competitions can get, check out this recent feature in Dr. Dobb’s.

As a final note – I sure hope the Visual Studio, NetBeans, Eclipse, Zend, Codeworks, and NuSphere folks pay attention to this and either integrate or buy N-brain.  Seriously, the system is that cool.  Yes, I know Visual Studio has team services… trust me, the features, price, and “locked-in-ness” don’t compare.

~R

Read Full Post »

In October of 2006, Netflix launched a $1,000,000 contest to improve its rating-prediction/movie-recommendation algorithms.  No one has won the prize yet (surprisingly).

I read the latest Wired (I know, I know), which featured a contestant.  I’m easily inspired to work on difficult challenges.  I figure this will be good learning AND good research into software collaboration (one of my favorite topics).  Yes, I’ve entered the fray as Team SocialMode.  Heck, I’ve been looking for a meaty project to put a bunch of my thoughts together.  This prize is perfect for that, and it takes nothing more than access to cheap CPU cycles and a brain (both of which I have; which I have more of, I don’t know).

My approach will not be highly abstract.  Having built several pay-per-click engines/behavioral targeting systems, simple content recommendation engines, search algos, and a few collaborative filtering systems, my experience leads me to believe an improved algorithm will come from practical analysis of rating behavior, user interface behavior, exposure to the movies being rated (cognitive-dissonance-type concepts), clustering of practical movie metadata (e.g. I like anything with George Clooney, 10 explosions or more, or dinosaurs), and normalizing simple “flags” (everyone dislikes the Star Wars with Jar Jar; you just need to adjust for an individual’s rating scale).
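
As a first concrete step toward that last point, here is a rough sketch of per-user normalization. The column names and toy numbers are mine, not the contest data’s; the idea is just to work with offsets from each user’s own average rather than raw 1-5 scores.

```python
import pandas as pd

# Toy ratings; the column names are my own, not the contest's.
ratings = pd.DataFrame({
    "user_id":  [1, 1, 1, 2, 2, 2],
    "movie_id": [10, 11, 12, 10, 11, 12],
    "rating":   [5, 4, 5, 3, 1, 2],
})

# Each user's mean rating is their personal anchor on the 1-5 scale.
user_mean = ratings.groupby("user_id")["rating"].transform("mean")
ratings["centered"] = ratings["rating"] - user_mean

# A predictor can now learn offsets ("liked it more than usual for them")
# and add the user's mean back at prediction time.
print(ratings)
```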

Some assumptions of mine:

  • People rate things they haven’t seen
  • People rate in batches
  • People don’t rate as they watch
  • Viewing experience affects rating
  • Technical quality affects rating
  • There are Gender and Age differences
  • Every individual has a different 1-5 scale
  • It is cognitively easier to rate something as a 1 or a 5 (love or hate) than a 2-4
    • We deal with bits better than in-between values
    • User interface widgets oftentimes make it harder to rate in-between values
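
The love-or-hate assumption suggests a cheap per-user feature: what fraction of a user’s ratings sit at the extremes. A rough sketch, again with toy data and column names of my own invention:

```python
import pandas as pd

# Toy data again; in practice this runs over the full training set.
ratings = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "rating":  [5, 1, 5, 5, 3, 4, 3, 2],
})

# Fraction of each user's ratings that sit at the extremes (1 or 5).
extremity = (
    ratings.assign(is_extreme=ratings["rating"].isin([1, 5]))
           .groupby("user_id")["is_extreme"]
           .mean()
)
print(extremity)
# A 4 from a high-extremity rater probably means something different than a
# 4 from a low-extremity rater, so this can feed the per-user rescaling above.
```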

I’ll have more to say on this.

My solution will use Python and the Orange library, and will utilize data from IMDb, BoxOfficeMojo, and RottenTomatoes.
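
Before any modeling, the raw data has to be loaded. Here is a loading sketch under my assumption about the published Netflix Prize training format (one file per movie, a first line like “123:”, then CustomerID,Rating,Date rows); adjust if the files differ.

```python
import csv
from pathlib import Path

def load_movie_file(path):
    """Yield (movie_id, customer_id, rating, date) tuples from one training file.

    Assumes the published format: a first line like '123:' followed by
    'customer_id,rating,yyyy-mm-dd' lines.
    """
    with open(path) as f:
        movie_id = int(f.readline().strip().rstrip(":"))
        for row in csv.reader(f):
            if not row:            # skip any blank lines
                continue
            customer_id, rating, date = row
            yield movie_id, int(customer_id), int(rating), date

# Hypothetical layout: one file per movie under training_set/.
ratings = [
    row
    for movie_file in sorted(Path("training_set").glob("mv_*.txt"))
    for row in load_movie_file(movie_file)
]
print(f"loaded {len(ratings):,} ratings")
```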


Let’s roll!  All progress will be posted.

~R

Read Full Post »