Dec 29 2017
 

It’s always nice to see examples where careful research challenges popular dogma, so in that spirit I flag an article about measuring the rates at which uninsured people use emergency rooms (first noticed via a Washington Post article on the same study).

Healthcare insurance is of course not squarely in our software/cyber wheelhouse, but the skeptical “show me” temperament illustrated here surely is. Many very expensive policy decisions have been sold to the public on the premise that uninsured people rely on emergency room visits in place of regular health care, with a consequent increase in costs and clogging of the facilities. The authors (who publish their work in the Maryland-based journal linked above) study the question and report that the ER visit rates of insured and uninsured people are comparable.

The flaw in the old lore might be one of perception, they suggest: uninsured people are not as commonly seen in the waiting rooms of non-emergency offices, so their appearance in emergency rooms is more noticeable. In other words, it wasn’t that uninsured people used the ER more than insured people so much as that they used regular facilities far less. (To which we would say: “Duh.”)

The cited paper has more nuggets regarding the linkages between cost and quality of care, and we commend to you a reading of the full paper – just as we commend a willingness to question, rather than simply accept, assertions that “sound right.” Especially when those assertions come from people who have a business interest in willing them to be true.

Posted at 9:35 am on December 29, 2017
Nov 14 2017
 

Two recent articles add to the list of materials that students in my lab should ponder.

The first deals with limitations of statistics in science, or at least with limitations in our understanding and application of statistics. This is an ongoing topic for our data scientists to track.

An NYT article on the NSA Shadow Brokers is especially worthy of your consideration, since so many of our present projects involve analysis and prediction of security-related properties.

To see how the above two readings are modestly related to one another, think about what data are used to predict opportunities to penetrate a site, what data predict potential intrusions over time, and what data are used to track uses of exfiltrated materials. Then … think about whether the science behind each is equally well developed or applied. What limits someone performing those activities, and how would scientists offer that person stronger tools? There are some great research activities lurking in the answer to that question.

Posted at 7:54 am on November 14, 2017
Nov 06 2017
 

I often talk with my students about bias when we are conducting a research project. Usually this discussion launches with me getting on a soapbox to throw around phrases like “old school methods” and “sound experimental design” (met with small eye rolls they think I don’t notice). My extended rant will cover a lot about how to conduct risk reduction exercises, which after a fashion resemble agile development methods, except that we iterate on our insights instead of our code. Overall we’re pretty interested in knowing how to make good engineering decisions about the use of our time; we want the greatest illumination for the least cost at each step along the way as we converge on results.

With that in mind, a nice read about bias is The Trouble With Scientists. It reminds us that while there are things we want to know, we need discipline to ensure we’re not just cherry-picking data to support a conclusion we already want to reach, and discipline to force ourselves to look coldly at what we don’t know. The two go hand in hand.
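For the data scientists in the lab, here is a tiny simulation of my own devising (not from the article, and the numbers are purely illustrative) to make the cherry-picking point concrete: run studies where no real effect exists, quietly test several subgroups, and report only the most flattering p-value, and the chance of a “significant” finding balloons well past the nominal 5 percent.

# Toy sketch (my own, assuming ordinary numpy/scipy; nothing here comes from the linked article).
# Compare an honest, pre-registered single test with a cherry-picked "best of five subgroups"
# report when the null hypothesis is actually true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_subgroups, n_per_group = 10_000, 5, 30
honest_hits = cherry_hits = 0

for _ in range(n_experiments):
    # Treatment and control are drawn from the same distribution: any "effect" is noise.
    p_values = [stats.ttest_ind(rng.normal(0, 1, n_per_group),
                                rng.normal(0, 1, n_per_group)).pvalue
                for _ in range(n_subgroups)]
    honest_hits += p_values[0] < 0.05      # the one comparison we planned in advance
    cherry_hits += min(p_values) < 0.05    # report only the best-looking subgroup

print("honest false-positive rate:", honest_hits / n_experiments)   # stays near 0.05
print("cherry-picked rate:        ", cherry_hits / n_experiments)   # climbs past 0.2

The exact numbers do not matter; what matters is that the selection step, not the data, manufactured the “finding.” That is precisely the discipline problem the article warns about.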

Posted at 3:47 pm on November 6, 2017
Jun 15 2017
 

Time for another roll up of relevant links!

Yes, what follows are all quick reads that I should have posted as I saw them, but let’s admit it: keeping this pace up when there is no expectation it will be used is pretty depressing. I started this site (in part) to seed a small repository of articles my students might use as examples of topics of interest to scholars in this field. Honors seminars, leadership development classes and more are places where we offer not just content but also the practices and temperaments of scholars in our field. How better to deliver these than by sharing notes on what I follow?

What a shame that neither the CS department nor Honors College actually want seminars, leadership development classes or more variety in our courses – not unless they are taught by one of the ‘in crowd.’

Some of us may be cultural pariahs, but that doesn’t mean we stop learning, thinking or critiquing, so without further ado, here are some recent examples of how to be a skeptical scholar.

The Most Important Scientist You’ve Never Heard Of is a great narrative about the discipline, objectivity and passion that scholarship demands … and the advocacy value with which it is rewarded.

Bullshit is a commodity much in supply on this campus. Exercise for the students: see if you can apply the tips below to sniff out which campus programs shovel bovine scatology and which offer you genuine value. The Baloney Detection Kit: Carl Sagan’s Rules for Bullshit-Busting and Critical Thinking is timeless, even if a bit lacking in specifics. Pocket Guide to Bullshit Prevention gives you another compact checklist. And in How to call bullshit on big data you can read about how scholars elsewhere teach ways to combat BS. (How interesting that that campus and ours treat the same topics so differently: one teaches students to sniff out the very methods the other teaches. Well.)

What’s the effect of sham scholarship? You get papers like The Conceptual Penis as a Social Construct: A Sokal-Style Hoax on Gender Studies. That paper is 100 percent USDA Choice bullshit, and it made a splash as such on the internet when word of it recently got out. Then there is Dog of a dilemma: the rise of the predatory journal – essentially more of the same. You will see these periodically. (Save them when you do, and even think about posting them here!) The business model of scholarship is supposed to filter out material like that before it piles up and starts to smell up our fields. Yet … there they are, linked above. It is worth pondering what equivocal assertions are populating our fields if even these blatantly silly works can appear. It is even more useful to ponder what scholarly practices will help you tell which is which.

Then there are the thorny examples. Daryl Bem Proved ESP Is Real is one. Figure that one out! There is, more generally, a lot of valid discussion of just how well some fields conduct experiments. Why we can’t trust academic journals to tell the scientific truth takes that up, but there are many other threads around the web too. Take, for example, Data, Truth and Null Results.

Let’s make sure our work gets cited in others’ blogs because it illustrates great qualities and positively influences the field … not because it is a teaching example of what other schools want their students to know how to sniff out!

Posted at 9:11 am on June 15, 2017
May 19 2016
 

Students in my classes know how often I advise them to “call your shots” – that is, do an honest and personal self-assessment of performance (whether on a project, in a class or on the job). Only by genuinely understanding the difference between your aspirational and actual outcomes (and why they came to be different!) can you become effective at bringing the two into alignment. Lies are things we tell at a bar, not things we tell ourselves. (Another colloquialism from Purtilo: “Never believe your own b******!”)

Doing such an assessment is often difficult, and not only because it brings us face to face with outcomes that sometimes fall short of what we wanted; it also takes practice. So when we see good examples of how this is done (especially with analysis as to why, so one can improve), we really sit up and take notice.

That’s the case this morning with a spectacular assessment from Nate Silver, at his site fivethirtyeight.com. Silver’s piece is titled “How I Acted Like A Pundit And Screwed Up On Donald Trump” and he goes into excellent detail about the statistical methods that worked (or sometimes didn’t work) in his predictions on this year’s races to date.

Forthright assessments are a hallmark of serious scholars, and I commend this to you as a great example. It should be all the more interesting to some here since Silver was of course one of our campus’s First Year Book authors.

Posted at 11:06 am on May 19, 2016
Feb 24 2016
 

Bad science (and sometimes difficult science done poorly) happens all the time, and periodically we re-blog examples as reminders. This edition’s sampling is titled for a truism quoted in the first article, about ways sham scientists plump up their own work and try to dominate the literature: “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”

In the ‘difficult science done poorly’ category we have another reminder, from a nice piece in The Atlantic, about the challenges of reproducibility in studies.

But what’s a reminder like this without one of the old standby topics: the publication of utter drivel?

Let’s be a bit more discriminating out there when it comes to what we might believe in the literature.

Posted at 6:22 pm on February 24, 2016