Jun 15 2017

Time for another roll up of relevant links!

Yes, what follows are all quick reads that I should have posted as I saw them, but let’s admit it: keeping up this pace when there is no expectation the material will be used is pretty depressing. I started this site (in part) to seed a small repository with articles my students might use as examples of topics of interest to scholars in this field. Honors seminars, leadership development classes and more are places where we offer not just content but also the practices and temperaments of scholars in our field. How better to deliver these than by sharing notes on what I follow?

What a shame that neither the CS department nor the Honors College actually wants seminars, leadership development classes or more variety in our courses – not unless they are taught by one of the ‘in crowd.’

Some of us may be cultural pariahs but that doesn’t mean we stop learning, thinking or critiquing, so without further ado, here are some recent examples of how to be a skeptical scholar.

The Most Important Scientist You’ve Never Heard Of is a great narrative about the discipline, objectivity and passion that scholarship demands … and the advocacy value with which it is rewarded.

Bullshit is a commodity much in supply on this campus. Exercise for the students: see if you can apply the tips below to sniff out which campus programs shovel bovine scatology and which offer you genuine value. The Baloney Detection Kit: Carl Sagan’s Rules for Bullshit-Busting and Critical Thinking is timeless even if a bit lacking in specifics. Pocket Guide to Bullshit Prevention gives you another compact checklist. But in How to call bullshit on big data you can read about how scholars elsewhere teach ways to combat BS. (How interesting that that campus and ours treat the same topics so differently: one teaches the detection methods, while the other teaches material that must be sniffed out. Well.)

What’s the effect of sham scholarship? You get papers like The Conceptual Penis as a Social Construct: A Sokal-Style Hoax on Gender Studies. That paper is 100 percent USDA choice bullshit, and it made a splash as such on the internet when word of it recently got out. Then there is Dog of a dilemma: the rise of the predatory journal – essentially more of the same. You will see these periodically. (Save them when you do, and even think about posting them here!) The business model of scholarship is supposed to filter out material like this before it piles up and starts to smell up our fields. Yet … there they are, linked above. It is useful to ponder what equivocal assertions are populating our fields if even these blatantly silly works can appear. It is even more useful to ponder what scholarly practices will help you tell which are which.

Then there are the thorny examples. Daryl Bem Proved ESP Is Real is one. Figure that one out! More generally, there is a lot of valid discussion of just how well some fields conduct their experiments. Why we can’t trust academic journals to tell the scientific truth takes that up, but there are many other threads around the web too. Take, for example, Data, Truth and Null Results.
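Students who want to see the underlying statistical trap for themselves can simulate it. Below is a minimal Python sketch of my own devising (it comes from none of the linked pieces, and the two-group t-test setup is simply an assumed illustration) showing why a field that runs many studies and publishes only the “significant” ones will accumulate nonsense: test enough pure noise at the conventional p < 0.05 threshold and some of it will always look real.

    # Illustration only: simulate two-group experiments where the TRUE
    # effect is zero, and count how many clear the p < 0.05 bar anyway.
    import math
    import random
    import statistics

    def null_experiment(n=30):
        """One 'study' comparing two groups drawn from the same distribution."""
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # Welch-style t statistic for the difference in means
        t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
            statistics.variance(a) / n + statistics.variance(b) / n
        )
        return abs(t) > 2.0  # roughly the two-tailed p < 0.05 cutoff here

    random.seed(42)
    trials = 1000
    hits = sum(null_experiment() for _ in range(trials))
    print(f"{hits} of {trials} pure-noise studies looked 'significant'")

Run it a few times: roughly five percent of the null studies clear the bar every time, which is exactly why a literature that never sees the null results is not telling you the scientific truth.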

Let’s make sure our work will be cited in others’ blogs because it illustrates great qualities and positively influences the field … not because it is a teaching example of what other schools want their students to know how to sniff out!

 Posted at 9:11 am on June 15, 2017
May 19 2016

Students in my classes know how often I advise them to “call your shots” – that is, do an honest and personal self-assessment of performance (whether on a project, in a class, or on the job). Only by genuinely understanding the difference between your aspirational and actual outcomes (and why they came to differ!) can you become effective at bringing the two into alignment. Lies are things we tell at a bar, not things we tell ourselves. (Another colloquialism from Purtilo: “Never believe your own b******!”)

Doing such an assessment is often difficult, not only because it brings us face to face with outcomes that sometimes fall short of what we wanted, but also because it takes practice. So when we see a good example of how it is done (and especially with analysis as to why, so one can improve), we really sit up and take notice.

That’s the case this morning with a spectacular assessment from Nate Silver at his site fivethirtyeight.com. Silver’s piece is titled “How I Acted Like A Pundit And Screwed Up On Donald Trump,” and in it he goes into excellent detail about the statistical methods that worked (or sometimes didn’t) in his predictions on this year’s races to date.

Forthright assessments are a hallmark of serious scholars, and I commend this one to you as a great example. It should be all the more interesting to some on campus since, of course, Silver was one of the First Year Book authors on our campus.

 Posted at 11:06 am on May 19, 2016
Feb 24 2016

Bad science (and sometimes difficult science done poorly) happens all the time, and periodically we re-blog examples as reminders. This edition’s sampling takes its title from a truism quoted in the first article about the ways sham scientists plump up their own work and try to dominate the literature: “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”

In the ‘difficult science done poorly’ category we have another reminder, from a nice piece in The Atlantic, about the challenges of reproducibility in studies.

But what’s a reminder like this without one of the old standby topics: the publication of utter drivel?

Let’s be a bit more discriminating out there when it comes to what we might believe in the literature.

 Posted at 6:22 pm on February 24, 2016