Evaluation, Academia, and H-Net: Do you "know"? Or do you KNOW?


I mentioned in an earlier post that we are in the process of crafting H-Net’s Strategic Plan for the next five years. As part of that process, some of us are dwelling on how to evaluate H-Net’s successes and failures, and what we can learn about our performance to help us make decisions about our future.

Like many people who teach college courses, I find academia doesn’t reliably pay enough to live on, so I have one foot outside of the academy in an ironically more profitable non-profit. My non-academic foot works in a department of Research and Evaluation at a mid-sized museum. It’s a decent gig. Most of my work is on the Research side, but it has been eye-opening to explore the world of evaluation and learn some differences between humanities and social science research in academia and the “applied research” of evaluation. (And the similarities: eval and a lot of university humanities/social science research are both done by researchers rather than “appliers,” and both often end up as “potentially applicable research” rather than “research that is actually applied.”) Evaluation is less exploratory than academic research: there is less allowance for letting the field inform you and much greater focus on answering very specific questions. It’s very clear what evaluators want to know and how they want their findings to be used. When done properly (and it isn’t always), evaluation is intensely data-driven and focused on defining actionable measures to make change.

Raising the possibility of evaluation at H-Net is tricky business because many of us think of H-Net as an academic organization and evaluation doesn’t happen much in academia. But a little thoughtful eval in both H-Net and academia can clean up all those times we talk about what we “know” versus what we KNOW, knowing in quotes versus knowing in caps. Here’s a conversation that cropped up on H-SHEAR about academic conferences that illustrates my point. In the discussion, the chair of an upcoming conference talks about factors in “successful proposals” for panels, and part of what spins out is a discussion of what makes a successful conference. Some of the ideas offered for solid panels are: members should not all come from the same department; members speaking from different disciplines is a virtue; reading papers is boring; PowerPoint is overused (or perhaps, from a capability standpoint, underused). I’d suggest many academic conference chairs “know” these are all correct, but no one really KNOWS whether they are correct, because no one ever does a systematic inquiry: no one determines the goals, defines “a successful conference,” or makes much effort to find out whether one has happened. Rarely does anyone collect and analyze data after the final conference cocktail party. Consequently, we settle on a sort of “Best Practices” that may or may not really be the best: every conference has panels, each panel has three speakers who each speak for 20 minutes, then there’s Q&A for 15 minutes if the speakers haven’t gone over (and they have). We do this at every conference not because careful inquiry proves that specific, thoughtful goals are definitively accomplished this way. We do conferences this way because we have always done conferences this way. Like much of academia, we have simply naturalized a structure and gradually convinced ourselves that 20 minutes is long enough but not too long, that panels are more informative than speakers or films, that panelists are better if they come from different places, and that three is a magic number.

To get specific, one member of the conversation says, “One of the continuing virtues of conferences is the opportunity to share scholarship with colleagues from around the country and internationally.” I’d say this is the commonly accepted raison d’être for academic conferences, and we all “know” it to be true. But if that’s the point of a conference, an evaluation would ask, “How much sharing actually happens?” According to the program of the conference under discussion, there will be 54 sessions, most happening concurrently (five at a time!), with panels of 3-4 speakers each, over 4 days, with the accompanying expensive hotel nights that many won’t stick around for. An evaluation might ask, “How many people are in the audiences of those panels?” (I’d guess we all “know” the answer is probably around ten or fewer at most conferences of this size.) So is there really that much sharing of knowledge? A lot of knowledge will be spoken, but not many will hear much of it, and any one attendee will hear only a small fraction of it all. A worthwhile question might also be, “If sharing research is what we’re trying to do, are there more effective, less expensive, and more widely accessible ways of accomplishing that goal?” Anyone reading this blog post on a computer KNOWS the answer. A bit of data collection could show the difference between successfully exchanging knowledge and research, as we need to do in academia, and doing things “because that’s how we do it.” Until we do the research, we “know” conferences are wonderful, but we do not KNOW how effective they really are, or for whom.
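To put a rough number on that “small fraction,” here is a minimal back-of-the-envelope sketch in Python. It relies only on the figures guessed at above (54 sessions, five running at a time, audiences of around ten); none of these are measured values, which is exactly the point.

```python
# Rough figures quoted or guessed above: assumptions, not collected data.
total_sessions = 54        # sessions on the program
concurrent = 5             # sessions running at the same time
audience_per_session = 10  # the guessed "around ten or fewer"

# The most sessions any one attendee could possibly sit in on:
attendable = total_sessions / concurrent      # about 11 sessions

# The largest share of the program any one person can hear:
fraction_heard = attendable / total_sessions  # 1/5, i.e. 20%

print(f"One attendee hears at most {fraction_heard:.0%} of the sessions;")
print(f"each paper reaches an audience of roughly {audience_per_session}.")
```

Even under generous assumptions, then, any one attendee hears about a fifth of what is presented, and each paper reaches about ten listeners. That is the kind of arithmetic an evaluation would make explicit before we declare the sharing a success.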

Who knows why academics don’t evaluate. Maybe it’s because many of us are trained to be researchers first, teachers second, and administrators, well, that’s just an expectation to be fulfilled without training (the same could often be said for teaching). Or maybe it’s self-preservation: questions lead to answers, and the status quo is working for someone. And maybe something about it all sounds too business-y: “data-driven decision making” and all things eval just sound too corporate for the lifers-of-the-mind. I’d argue that a little more assessment of our goals and successes, and a little less of just knowing we’re doing alright, could prevent things like what the University of Wisconsin is experiencing now.

But to get back to my point…the quotes-vs.-caps dichotomy exists at H-Net, too. It’s something I (and others) would like to address in the Strategic Plan, and it’s why I’m writing all this here. I’ve been tooling around this organization of ours for a number of years as subscriber, editor, and VP and have heard countless (which really means “uncounted”) “we know” statements. “We know many subscribers are not academics.” “We know the new interface is easier.” “We know people like the reviews or Announcements or Jobs or whatever.” “We know people are reading this stuff.” But we don’t really KNOW a lot of things, or we only know things if we keep statements too vague to be meaningful. For example, I know many subscribers are not academics, but I don’t know how many, or why they subscribe, or what content they might be interested in, or whether they think the interface is easier, or… And what would be good answers? Integral to evaluation is how success is being defined. What’s success for H-Net? Or for any one H-Net network? I’d like to see just about every organization in academia—conferences, journals, associations, departments, universities, and H-Net—get deliberate about goals and data collection so we can make better decisions about our future. That’s what I’ve been thinking about, and I invite you to do so, too: what do we “know” and what do we KNOW, how can we KNOW more, and how can we use what we KNOW to do better?

Do you do your own form of evaluation on an H-Net Network? Do you have goals for your network? Are there stats or other data you collect to help inform your decision-making? Qualitative data is welcome here! Do you collect testimonials, maybe emails? Would you like to get systematic about it to lend the data more validity? How do you decide what your next steps will be, and how will you know when you are successful? Is there data you’d like to see H-Net collect at the organization-wide level and share with editors? Or do you already “know”?