Standards of Accuracy in Historical Scholarship

Four years ago, disturbed by what appeared to be a growing trend of careless source citation in works of scholarship, I wrote a comment and sent it to H-SHEAR.  It ran under the heading “Citations and Standards” (April 26, 2011) and generated some interesting discussion.

I was put in mind of this bit of history while reading two recent big books from major publishers, Walter Johnson’s River of Dark Dreams: Slavery and Empire in the Cotton Kingdom (Harvard, 2013) and Edward E. Baptist’s The Half Has Never Been Told: Slavery and the Making of American Capitalism (Basic, 2014).  Both have been widely acclaimed, and Johnson won last year’s SHEAR book prize.  Slavery and capitalism are the major themes of both books, but both also stake large interpretive claims concerning antebellum politics and the approach to the Civil War.  Both are heavily documented: 86 pages of endnotes in Johnson’s book, 60 in Baptist’s.

Statements coming from such distinguished sources command respect.  But by the very virtue of their importance they also invite scrutiny, not only for the persuasiveness of their argument but for the soundness of the evidence on which that argument rests.  It is surely not unreasonable to expect from master historians the same standard of care required in a well-vetted dissertation monograph.

Even before reaching the heart of Johnson’s political analysis, I was struck by an arresting ten-line block quotation on page 318, attributed to Senator Andrew Butler of South Carolina.  It was arresting because it was eerily familiar: Johnson had used a shorter version of the same quotation to open the chapter just fifteen pages earlier (p. 303).  But there he gave its author not as Butler but as one Leon Fragua de Calvo.  Johnson’s footnote was of no help in sorting out who really wrote this passage, nor in determining which of the two versions he presents (they differ by two words and five commas) is actually correct.

On page 373 Johnson tackles antebellum politics directly:

“As at other moments of crisis—particularly the South Carolina Nullification crisis of 1831 (the refusal of the state of South Carolina to enforce the federal tariff, a problem that was eventually resolved only with Andrew Jackson’s threat to use the United States Army to invade the Palmetto State) and the Virginia slave emancipation debates of 1832—by the late 1850s several strains of thought that fed the ideological identification of ‘the South’ with slavery and slaveholding were beginning to produce rogue strains that threatened to metastasize into a real threat to slaveholding power.  If the geographic dimensions of politics of slavery in the 1850s (the fight over the West, which culminated in the state-for-state Compromise of 1850; the fight over Kansas and the doctrine of ‘popular sovereignty’ for territories becoming states; the Kansas-Nebraska Act) made it inevitable that the defenders of slavery would come to think of their struggle in increasingly sectional terms, it also provided a frame that called attention to variation within the supposedly uniform space of ‘the slaveholding South.’”

Regarding this passage:

1. The nullification crisis was in 1832–33, not 1831.

2. South Carolina did not refuse to enforce the tariff.  It was not and is not the province of state governments to enforce federal laws.  Nullification meant preventing the federal government from enforcing its own laws, which is something quite different.

3. While Jackson did unofficially threaten invasion, it was the passage of a compromise tariff in Congress, not Jackson's threat, that is usually, and with reason, credited with resolving the crisis.

4. There was no “state-for-state Compromise” in 1850.  Indeed, the very unavailability of that Missouri-like solution was part of the problem that needed compromising.  There was no slave territory ready for admission to accompany free-state California.  The Compromise of 1850 did not maintain the balance of free and slave states.  It broke it, and many expected that, once it was first broken, the imbalance would widen.  They were correct.  Minnesota joined the Union in 1858 and Oregon in 1859.  No slave state was admitted after Texas in 1845.  The Compromise of 1850 balanced California’s admission not with a new slave state but with other concessions to Southerners, notably the notorious Fugitive Slave Act.

5. The popular sovereignty doctrine did not concern “territories becoming states.”  It concerned territories before they became states, perhaps even before they became formal territories.  After Missouri, politicians in both sections generally acknowledged the unhindered right of citizens to choose freedom or slavery at the moment of becoming a state, even the right of citizens of a free territory to make a slave state.  Abraham Lincoln conceded that point directly in his 1858 debates with Stephen Douglas.  The popular sovereignty doctrine concerned the ability of territorial legislatures or citizens to preempt that choice by sanctioning or interdicting slavery at an earlier stage.  The distinction is not arcane: through much of the 1850s, the question of when or how territorial denizens could act on slavery well in advance of statehood formed the very crux of controversy.

That’s five consequential misstatements in two (overlong) sentences—and this from a historian who does not shy from lecturing us, even in this same paragraph, on how we have misapprehended the coming of the Civil War.

Edward Baptist begins the story of Andrew Jackson’s presidential inauguration by telling of “a man who on March 5, 1829, woke up aching in Washington, DC.  . . . Andrew Jackson’s wiry old body felt the frost. . . . Now, as Jackson rose to his feet, a slave waiting outside the door heard the old man and entered the room.  A few minutes later, the president-elect emerged” and went down to breakfast (p. 224).  Every detail in this scene is imaginary except the one that’s wrong: inauguration day was March 4, not March 5.   Baptist then turns to Jackson’s inaugural address, which he discusses in four paragraphs, each individually footnoted and devoted to a separate topic: Indian relations, foreign affairs, the tariff, and the patronage (pp. 227–28).  As Baptist relates:

“First, Jackson announced that he planned to address the Indian issue according to the ‘feelings’ of his countrymen.  Almost 50,000 native people still lived on and held title to 100 million acres of land in Georgia, Alabama, Mississippi, and Florida.  The ‘feeling’ of Jackson’s countrymen was that they wanted that land in order to launch expanded cotton-and-slavery-induced booms.”

Whatever Jackson may have felt, this is not what he said.  Here is the full text of his paragraph on the Indians, the ninth (not first) of the address:

“It will be my sincere and constant desire to observe toward the Indian tribes within our limits a just and liberal policy, and to give that humane and considerate attention to their rights and their wants which is consistent with the habits of our Government and the feelings of our people.”

On the fourth issue, Jackson’s call for patronage “reform,” Baptist says “we know that the president was more concerned about the Second Bank of the United States,” even though “he left the harshest B.U.S. lines out of his inaugural address.”

One cannot leave out what was never there to put in.  Jackson would, within months, vigorously attack the Bank.  But as of March 1829 he had not said one word against it, either in public or private, and there was as yet no intimation that he would.  If “we know” that he was more concerned about it than he was about the patronage, we know more than he did himself.

The point is more than semantic, for there were things that were indeed left out of the inaugural address.  We have the first draft of it in Jackson’s own hand.  It proposed the distribution of surplus federal revenue among the states “for purposes of education & internal improvement.”  This recommendation was elaborated in a reworked draft by Jackson’s nephew and secretary Andrew Jackson Donelson, but then excised entirely from the final text.  Neither Jackson’s draft nor Donelson’s said anything about the Bank (The Papers of Andrew Jackson: Volume VII, 1829, pp. 74–79).

How could Baptist go so far astray?  The text of Jackson’s inaugural is absurdly easy to find.  It is posted on various websites, all cribbed more or less faithfully from the standard printing in James D. Richardson’s Compilation of the Messages and Papers of the Presidents, which is itself accessible online.  Jackson’s draft is not in Richardson, but it is in The Papers of Andrew Jackson and also in John Spencer Bassett’s earlier Correspondence of Andrew Jackson, both of which Baptist cites elsewhere for other purposes.

Not one of Baptist’s four footnotes to his paragraphs on the inaugural address cites the address itself, in any printing or at any location.  His third footnote, to the paragraph on the tariff, reads in its entirety as follows:

“25. NR [Niles’ Register], March 8, 1828, 19–22.  Historians still argue about whether or not the plot existed, and if so, what it entailed: Michael P. Johnson, “Denmark Vesey and His Co-Conspirators,” William and Mary Quarterly, 3rd ser., vol. 58, no. 4 (2001): 915–976; James O’Neil Spady, “Power and Confession: On the Credibility of the Earliest Reports of the Denmark Vesey Conspiracy,” William and Mary Quarterly, 3rd ser., vol. 68 (2011): 287–304.”

The cited Niles’ Register pages have nothing to do with either Jackson’s inaugural or the Denmark Vesey conspiracy.  Elsewhere, Baptist puts the Indian Removal Act in the wrong year, misquotes a famous passage from Jackson’s Bank veto message in 1832, and touts the black soldiers of the “famous 52nd Massachusetts Regiment” (pp. 265, 250, 402).

What is striking about these errors is not so much their magnitude as their obviousness.  It is not all that hard to get these details right.  So does getting them right not matter?  Are they merely incidental slipups, or tips of a larger iceberg of carelessness or ignorance?  At what point does factual inaccuracy undermine authorial credibility?  It seems to me now, as it did four years ago, that such questions need to be asked, because they speak to the very integrity of the historical enterprise.

Daniel Feller

University of Tennessee

Dear Daniel Feller,

I appreciate your post, and I share your concern. In recent posts on the Scholarly Kitchen and on the Omohundro Institute's Uncommon Sense I wrote about the intensive process of producing and publishing historical scholarship. In the first I referenced Charles Seife's chilling exposé of the errors in peer-reviewed and published FDA-funded research and wondered whether, though many people might be up in arms about errors in food and drug research, it isn't just as important to get it right when it comes to historical scholarship.

But I think you've got the wrong target. Those are two fine books you reference, by accomplished, serious and dedicated scholars. The errors do not to my mind indicate sloppy research or carelessness about argument. Rather, they suggest that the hard (and costly) processes of editing and checking in production are in short supply. These are distinct labors, best undertaken by people with training and talent.

But book publishing is a costly business. I have thoughts about this, too; for now I'll just note that no author can be her own editor, fact-checker, and/or proofreader.

Thank you for raising these issues -- I think they're vital for the profession to address.

Karin Wulf

I agree with Karin Wulf. I've discussed this issue over the years with colleagues, some of whom have been quite candid with me about their surprise and dismay at the relative lack of careful checking and reviewing of their book manuscripts by commercial and university publishers alike. My experience has been pretty good with such matters, but I've seen a few errors caught in my own work, which is dismaying and disappointing. Karin Wulf is absolutely right -- we cannot be our own editors, fact-checkers, and proofreaders.

We know that publishing has cut back on various things that publishers used to do for authors -- proofreading, preparing indexes, and so forth. Much copy-editing is now distributed to badly paid freelance editors (often former employees of publishing firms), who scrape by on as many manuscripts per week as they can manage. In such a situation, errors are bound to creep through the sieve. Most authors now index their own books rather than front the cost of hiring an overworked and underpaid indexer (I'm one of those authors).

Respectfully submitted,

R. B. Bernstein
Colin Powell School, City College of New York
and
New York Law School
rbbernstein@gmail.com, rbernstein@ccny.cuny.edu, rbernstein@nyls.edu

Professor Feller raises several questions which badly need raising, and Professor Wulf walks around a couple of them. Confusing one of the central points of the Compromise of 1820 with the Compromise of 1850 may not be "sloppy" or "carelessness," but if it isn't, what then, pray tell, is it? Or is it that the thesis is all-important and the presence or absence of supporting data is immaterial? But whatever it is, the fact that it appeared in the manuscript is not the work of the publisher, the fact-checker, or the copy editor. And does the presence of such errors undermine the author's argument? To me the answer is yes. If indeed the devil is in the details, then one should accept the premise that the details are important.

Or are we dealing with yet another example of what one of my colleagues calls "Post-Factual History," and Professor Feller's problem, along with mine, is that we are "Pre-Post-Factual Historians"? It might be fun to see how this plays out.

G. L. Seligmann
Dept of History
Univ. of North Texas

Dear Karin Wulf,

While I agree that some of the errors are minor (such as the inaugural date as March 4th rather than 5th), others (such as blatantly getting the Compromise of 1850 wrong) are a bit more concerning. As you stated, these are great books by serious and dedicated scholars, but that makes them the right target. If some of our best and brightest scholars are making errors that could have been remedied by merely reading the first paragraph of a Wikipedia entry on the Compromise of 1850 (which is far from an obscure historical moment in the 19th century, btw), I think Mr. Feller has reason to be concerned.

Paul Warden
Graduate Student
Department of History, UCSB

So far the discussion has proceeded on the premise that errors in factual content and citation are more common in historical scholarship today than they were twenty, forty, or sixty years ago. Do we have any evidence beyond personal impressions and anecdote to support this premise? One of the reasons we include citations in historical scholarship is so that later scholars can trace our evidence paths and not repeat our mistakes. That factual and citation errors occur even in books written by prestigious scholars and published by the most highly regarded academic presses is a lesson I learned decades ago very early in the dissertation research process.

For those fortunate enough to see their works go through multiple editions, there are opportunities to correct these errors. One potential great benefit of the ebook revolution is the possibility that errors in monographs could be continually corrected and updated at minimal cost. Authors of Kindle ebooks, for example, have the ability to make corrections and updates to their work and have them sent to all previous purchasers of the book.

While it might be helpful for an editor or fact checker to catch us historians in our errors, it is simply not going to happen. Indeed, the position of "fact checker" does not exist in book publishing (as distinct from magazine publishing). And how could an editor ever catch any of the errors Professor Feller lists? Who is the expert on the subject? It is the author, not the editor.

Perhaps the fact that some historians believe their publishers will catch their errors is part of the problem. Alas, the responsibility is all ours.

Every book has errors. I know from publishing two books that, despite my best efforts, I did not avoid them. But if perfection eludes us, trying as hard as we can to catch our own errors is still essential.

As for the argument that facts are not really facts, I do not know what to say. That argument is for folks in literature, not for us.

Louise W. Knight
Visiting Scholar
Gender and Sexuality Studies Program
Northwestern University
www.louisewknight.com

Hi:

I am a grad student concentrating in Early American Studies. I have had a couple of papers published on early American history. I wrote a Master's Thesis in Interdisciplinary Studies on Caribbean Identity. I read this discussion with interest. I was particularly interested in the statement by Karin Wulf: "But book publishing is a costly business. I have thoughts about this, too; for now I'll just note that no author can be her own editor, fact-checker, and/or proofreader."

Admittedly, I enter this discussion from a lowly status. However, I consider myself part of the academic community and I reacted strongly to this statement, in regard to the phrase "fact-checker."

It has been my belief, with both my thesis and my articles, that the buck for accuracy stops with me, and I have proceeded on that basis. It is hard work, but if one enters the lists of research, it should be correct. My articles were peer-reviewed, and in a few instances heavily challenged factually by unknown experts. In one instance, I had indeed made a careless mistake and gratefully accepted the correction. The rest of the time I had not, and was able to justify my statements and conclusions to my editors. I worked very hard to make sure that everything I quoted was correctly attributed. My "take" on the history was my own, but in terms of facts, I knew that if I were incorrect, I was not a good scholar, penalties aside.

I personally don't think any researcher, at any level, can or should depend upon anyone else to "correct" the research. Mistakes happen, but they should be few and far between.

Sharon Tevis Finch
Wayne State University

I have published a lot. Sometimes there are errors of fact in my work. Sometimes I later find out I was wrong about something. I try to correct later discovered errors in subsequent publications. But my errors are MY errors. They are not the result of harried publishers who failed to cite check my work. I am happy when good copy editors do that. And I am delighted that the publishers I have worked with use skilled and competent copy editors, but in the end, we are responsible for work that has our name on it.

Professor Wulf argues that errors such as not knowing the difference between the Compromise of 1850 and the Compromise of 1820, or not knowing that the federal government (not the states) enforces a federal tariff, are not examples of "sloppy research or carelessness about argument." She'd like to blame the publisher. But it is not the job of the publisher to correct the errors of the author.

On the contrary, let us hope that they are *only* examples of sloppy work or carelessness. Alternative explanations would be far worse.

Paul Finkelman
Senior Fellow
Penn Program on Democracy, Citizenship, and Constitutionalism
University of Pennsylvania
and
Scholar-in-Residence
National Constitution Center
Philadelphia, Pennsylvania

For another way of thinking about and framing these important issues, albeit with a different literature, SHEARites could do worse than read Mitch Kachun's fine essay "Antebellum African Americans, Public Commemoration, and the Haitian Revolution," which ran in JER 26:2.

I'm grateful to Dan Feller for initiating this thread, and certainly to Sharon Finch as well for her spirited contribution. Looking to echo Paul Finkelman, too.

What I publish is on me. I'll take all the help I can get, from rave reviews of manuscripts I submit somewhere, and just as much (all right, even more, even much more) for hostile but righteous comments that force me to clarify (or if necessary correct) what I've said and had planned to publish. But in the end it's on me to get it right. From time to time I get something wrong. In my most recent book, on Loving v. Virginia, for example, I track two or three errors that had made their way into my earlier publications over many years on the subject, and thus alert readers for whom it might matter that my facts have changed.

I do think, of course, that readers of manuscripts, to be responsible, must have some authority for reviewing other folks' work, thus be in a position to gauge the reliability as well as the significance of what they're reading, and must point out errors they run into. That's central to the peer-reviewing process, seems to me. And I see a distressing number of errors of all sorts (not to mention examples of plagiarism) make their way past all the screens that one wishes would be operating effectively.

But in the end it is on the putative author of a piece of scholarship to send out into the world something that really has a claim on getting out there. Just saying,

Peter Wallenstein
Professor of History
Virginia Tech

Thanks to Cathy Kelly for recommending my JER piece. I have to share that--after chastising my peers for carelessness or over-reliance on secondary sources--someone contacted me to let me know I had made a blatant error in that article regarding the terms of France's policy toward Haiti.

I have therefore been following this conversation with interest, and agree with the idea that we historians--not editors or fact-checkers--must take responsibility for what we publish or present. I do wonder, as someone else on the thread asked, whether these kinds of errors are more prevalent in recent years than they had been, say, a generation or two ago. Any thoughts?

I'm reminded of the reactions to "You didn't build that," Elizabeth Warren's, and then President Obama's, effort to put individual entrepreneurs within the critically important contexts of the infrastructures that support them. While none of this contextualizing was meant to imply that the entrepreneur isn't centrally responsible for their own efforts and actions, a key point was that the invisibility of that infrastructure in the rhetoric of success (and failure) is costly. Among other things, to Richard Bernstein's point, you can devalue and then defund it more easily.

I'm cheered by the number of emails I've received from colleagues who are looking forward to having their graduate students participate in editing and checking exercises, both on one another's work and on their assigned reading. I think that will reinforce both how important it is to take care in one's work, and also how crucial it is to have multiple eyes (and pens) on a project at every stage. If there's a good takeaway here, it is that the rigorous production of scholarship is a strongly shared value.

Karin Wulf

While certainly we should all strive to make sure every "fact" in our books and articles is correct, as more than one person in this thread has acknowledged, despite our best efforts most of our books and articles have a minor error here and there. I would imagine this, too, was the case in that longed-for but imaginary past when facts were facts and interpretation was for other disciplines. So I'm not quite sure why Baptist and Johnson are not given the benefit of the doubt here. While I'm sure Johnson would agree that his description of the Compromise of 1850 is not quite accurate, I'm also fairly certain that one of the premier historians of nineteenth-century slavery understands the Compromise, and that the passage was a mistake, a product perhaps of draft after draft merging, paragraphs changing, and an error being overlooked. And while I don't think Karin Wulf was suggesting that every error would be caught by a more rigorous and well-funded editorial process, I don't see how anyone who has published something in the last decade or so has not had some experience with the defunding and/or outsourcing of editorial work. We can all accept responsibility for any errors in our books and articles without holding on to some heroic fantasy of the all-powerful author who catches every error through sheer rigor and dedication.

But I think Professor Feller's examples speak to the fact that interpretation is always part of determining fact - if Johnson may have made an error, I think for the most part the concerns raised about Baptist are concerns of rhetoric, style, and interpretation, not necessarily of fact. One can argue over the limits of how we imagine the past - and we all imagine the past in some sense; that is what constitutes the writing of history - but I don't think we can pretend that everything is a matter of fact. One need not read Hayden White or Michel de Certeau to come to this conclusion (although I would suggest that everyone should) - just read E.H. Carr or R.G. Collingwood.

I may sound like a postmodern apostate here, but sometimes I think our field could use a bit more, rather than less, of that kind of thinking.

I too think this is a valuable discussion and that authors are ultimately responsible for the quality of their scholarship. Errors creep in at every stage, from typos made in research notes (and then perhaps auto-corrected by word-processing programs) to cutting and pasting across multiple drafts and platforms. The sorts of errors being pointed to can't be addressed by more technology, or by scholars juggling too many things at once.

So, I wonder about more structural factors that shape the amount of scholarship published and the time that can be devoted to 'getting it right.' In the Canadian context at least, there was tremendous pressure to expand the size of graduate programs while curbing time-to-completion, often stretching the capacity of supervisors and supervisory committees as well as the candidates. Among the consequences were too many people chasing too few academic jobs at a time when the managerial university obsesses about counting the number of peer-reviewed publications, often with little concern about quality. Numbers of publications that were once unheard of (or would even have raised suspicion) are now needed to get a post-doc, a job, a better job, some form of job security, and to meet metrics that can't help but push things into print without the time and care that the best and most lasting scholarship requires.

I'm amazed at how much great history is still published and how conscientious most peer-reviewers still are, but factual errors may be only one symptom of the effects on scholarship of increased pressures to produce more faster.

This has indeed been a fascinating discussion. I want to respond to Mitch's reiteration of a previous question about whether these kinds of errors are more prevalent than they have been in the past. I'm wondering what the implications are underneath the question. Are we asking if newer historians are just not as competent, focused, reliable or . . . (whatever word fits there) as historians, say, a generation ago were? Are we/they being trained differently? OR are we asking if, in fact, newer historians are under WAY too much pressure to publish quickly and often (along with high teaching loads and time-consuming service responsibilities) and thus such errors are bound to occur?

Vivian Bruce Conger
Associate Professor and Robert Ryan Professor in the Humanities
Ithaca College

Mitch Kachun joins several writers here in asking "whether these kinds of errors are more prevalent in recent years than they had been, say, a generation or two ago."

My fellow Wisconsin alumnus Dan Feller might remember this extract of a mini-memoir circulated among historians and grad students at the University of Wisconsin in 2009.  Its author, a well-regarded scholar, describes William Hesseltine's research seminar ca. 1949, which required students to fact-check famous historians' work:

Bill Hesseltine’s graduate-methods seminar won both plaudits and brickbats from his colleagues: praise because students left it with a robust sense of the frailty of historical truth, rancor because they themselves were the guinea pigs whose publications served as Hesseltine’s damning evidence. Our first assignment was to vet an essay in a major scholarly journal, written by a tenured staff colleague, for simple accuracy –– word for word of every quotation, the spelling of each name, place, title, date, publisher in text and notes, and pagination for every reference. Moreover, wherever a secondary source was cited we were required to check its conformity with the original. Finally, we were told to itemize the nature and tabulate the number of errors, and to allocate the essay an overall rate of inaccuracy. All of us were astounded –– and in scrutinizing trusted mentors appalled –– to find rates of error seldom below 50 percent, and often as high as 80 percent.

            Our next task was to review the same essays to determine how and why the faults had arisen. In many cases, reliance on a secondary source without consulting the cited original meant repeating an initial omission or error verbatim. The great majority of the errors that we found, however, stemmed from what looked like simple carelessness. Our slipshod author had erroneously or incompletely transcribed his archival or library notes, and –– this proved the crucial fault –– had not taken the precaution prior to paper submission or proof-reading to recheck his sources. [I use the masculine pronoun advisedly, for there were then no women on Wisconsin’s history faculty.]

            We were left shaken and aghast by our elders’ and betters’ manifest shortcomings. But we gained from them cautionary reminders that we strove to take to heart in our own work. In particular, we realized that error in any transcription was not the rare exception but the common rule; that no scholar however painstakingly scrupulous was immune to hasty or careless lapses; and that cutting corners by relying on secondary sources almost always compounded mistaken use of sources and foreclosed potentially fruitful insights.

            But the most consequential lesson that emerged from these lapses, going right to the heart of the integrity of history writing, was yet to be revealed. Our next assignment, demanding a month of research and analysis, was to come up with a reasoned judgment about whether [and if so why] these errors really mattered. Although numerous and regrettable, most of them were, after all, matters of detail –– a wrong page number, a misspelled name, a misdated publication –– trivial details that would not materially affect the author’s conclusions nor more than minimally vex most readers. They were easily rectified with small harm done. Given the constraints on scholars’ creative time and energy, was it therefore not best to forgive these lapses? A cursory apology by their busy perpetrator might suffice, along the lines of the eminent historian E. H. Gombrich’s response, when caught out in some minor misstatement: ‘mea culpa, mea minima culpa.’

            Alas, this proved not to be a viable conclusion. The further we delved into the matter as well as the language of our authors’ essays, the more such minor peccadillos served, like coal miners’ canaries, to alert us to fundamental and grievous faults. Failure to go back to an original source, for example, left the author at the mercy of some intermediary scholar’s imperfect or slanted reading. Moreover, seeing the full original often called into doubt the meaning or context of the quoted or cited statement advanced by our second-hand user. Still more, original sources often revealed pertinent data unseen by and unknown to our lazy secondary author. 

            Even more detrimental was our hapless guinea pigs’ failure to recheck their sources before publication. For the resultant errors went far beyond simple transcription mistakes. Reviewing these sources, we soon saw that our authors often misconstrued, either initially or later, the meaning or import of their research data. They had used that material to support their own conclusions, while ignoring or slighting contrary evidence and alternative viewpoints in the same sources, often in the same paragraphs. Only rereading those sources before going to press could have alerted them to the selectivity trap they had fallen into. For in the process of writing and rewriting, historians, no less than other scholars, are ever prone unconsciously to select from and alter evidence for the sake of coherence, consistency, and credibility.

            Absent a final close reading of source materials, such deformations, we were forced to conclude, are inescapable. Every historian unknowingly makes things up while writing and revising, selecting, omitting, and reshaping data to make an argument clear, a point vivid, a conclusion indubitable, both to himself and to an intended audience. We had been rightly schooled to abhor deliberate bias, knowing nonetheless that objectivity was ever at best a noble dream. But none of us had realized quite how far unconscious bias suffused the entire process of gathering and using sources, let alone how important it was –– and how much work it took –– to curtail that bias.

            These findings affected us in three important ways. First, they warned us to view with extra caution the veracity and conclusions of historians given to manifold, even if seemingly minor, carelessness. Second, they were invaluable reminders in our own doctoral research, time-consuming and costly as adhering to them proved to be. I vividly recall the additional final week I spent in the National Archives in Washington. I knew I had not only to check spelling and other defects in my initial transcripts and synopses of my biographee George Perkins Marsh’s fifteen hundred-odd diplomatic despatches from the Ottoman Empire and Italy between the 1850s and the 1880s. I also needed to reread those despatches for their content in their entirety, to see and gauge what my penultimate thesis draft had omitted, scanted, or misinterpreted.

            The third lesson was the most sobering. However much we took these cautionary principles to heart, however often we vowed to adhere scrupulously to their tenets, we soon came to realize that we would never unfailingly do so. Indeed, our lapses like our mentors’ were bound to become more and more numerous the busier our careers. How many historians take the time, even supposing they have the resources, to recheck every source before publication? Who faithfully ferrets out every original source from its secondary use, especially when the ‘original’ turns out to be another doubtless dubious secondary? A scholarly task would never be complete. So we know we fall inexcusably short.

            This mortifying knowledge should engender a proper humility. To realize that historians’ deficiencies are personal as well as structural is the shock of an ice-cold shower. To the genre’s own insuperable defects –– data that are never complete, the impassable gulf between actual pasts and any accounts of them, bias stemming from temporal distance, from hindsight, and from narrative needs –– we must remember to add, and to keep in mind, human frailty. Hence we rightly, humbly accede to perpetual revision of our work, not only because new data keep coming to light, and new insights keep arising, and the passage of time alters the outcomes of our histories, but also because we have good cause to admit that we can never wholly live up to the demanding tenets of our trade.

            We need not be ashamed, therefore, that some successor is apt to disclose our unwitting mistakes and lay bare their sorry consequences for historical truth. We are only duty-bound to minimize such errors as far as reasonably possible. And to impart to our own students the invaluable lessons bequeathed us by William B. Hesseltine. Decades on, Hesseltine’s ‘Historians’ Ten Commandments’, a potpourri of still relevant humane advice, came to light. Notable among them, not surprisingly, was ‘thou shalt not quote from secondary sources.’ Other anathemas forbade the passive voice and the present tense, and designating persons by their last names only. And against the rising tide of pompous impedimenta, insisted Hesseltine, do ‘not discuss thy methodology,’ ‘write about thy subject and not about the documents concerning thy subject,’ and ‘fight all thy battles in the footnotes.’ Finally –– so pertinent in today’s cult of apology and forgiveness, of Wiedergutmachung, making good again, which tempts historians to turn moralist –– ‘thou shalt not pass judgments on mankind in general nor shalt thou pardon anyone for anything.’

good wishes,

Peter

Peter's reply is illuminating. It suggests that errors have always been around -- which we all know. But it also raises the more interesting question: Is anyone training graduate students in this way? I can recall being dressed down by John Hope Franklin, in his thoroughly regal and kind manner, for citing to census material from a secondary source, rather than looking at the actual census volumes (we used books in those days!). He asked me how I knew that the secondary sources accurately reported the census numbers. I of course had no answer.

Since then, I go to the source whenever I can; when I don't, I try to cite sources "as cited by xxx, in yyy." This not only allows us to acknowledge those we have learned from (and whose research we have used), but also gives us a little cover if the citation was wrong. I am still responsible for any error, but at least I would have documented where I went wrong and why.

Again, I reiterate. Great copy editors are wonderful. So are readers of our manuscripts. I acknowledge both in my work. But we do a profound disservice to our craft and our profession when we blame others for our mistakes. In doing so we undermine our credibility.

I would ask those who are focusing on copy editors and the resources of university presses why they would trust the accuracy of their work to an editor who probably does not have the same graduate training or knowledge of the subject as the author.

On the other hand, there is a huge problem with readers not actually critiquing articles and books for scholarly journals and publishers and, worse yet, commercial publishers not even sending history books out for review. In the last decade or two we have been deeply embarrassed by awarding prizes to books that were not vetted by publishers and turned out to contain not merely mistakes (like confusing what the Compromises of 1820 and 1850 did) but far more serious errors, including research based on archives that did not in fact exist.

As historians we interpret the past and help explain the present. But our credibility begins with a careful statement of known (and discovered) information and the ability to document these facts.

I would urge Professor Wulf to use her position at the Omohundro Institute to organize a conference on these issues so we can have a lively and civilized discussion about the standards of the profession. It is a truly important issue.

Paul Finkelman
Senior Fellow
Penn Program on Democracy, Citizenship, and Constitutionalism
University of Pennsylvania
and
Scholar-in-Residence
National Constitution Center
Philadelphia, Pennsylvania
paul.finkelman@yahoo.com

The comments on this thought-provoking thread have brought out a number of different issues concerning historical accuracy. If I may, let me add one more. Part of maintaining historical accuracy involves keeping current in the scholarship and integrating new, different, and even competing interpretations in one's work. An example: in the initial post, Professor Feller takes issue with Johnson's comments about popular sovereignty concerning "territories becoming states." Feller argues that Johnson has mischaracterized the doctrine. Not necessarily. I wrote a dissertation and a book on this subject, and in my research I found that northerners and southerners had competing definitions of popular sovereignty. Especially after 1854, southerners argued that popular sovereignty meant that a territory had the right to determine the status of slavery only in preparation for admission to the Union. Northerners argued, as Feller indicates, that the "popular sovereignty doctrine concerned the ability of territorial legislatures or citizens to preempt that choice by sanctioning or interdicting slavery at an earlier stage." The distinction I'm proposing is hardly "arcane" either. Northerners and southerners (but especially southerners) repeatedly altered their definitions of popular sovereignty for political expediency. After all, why would a proslavery southerner support popular sovereignty if it virtually ensured an antislavery outcome?

Point being, it's quite easy to make errors or omissions of interpretation if our understanding of the fields in which we work isn't current. As for basic factual errors, others have already made the compelling case that the buck stops with scholars. Having said that, and knowing that I had a superb editor on my book, I cringe to think of my own errors. Mea maxima culpa!

Christopher Childers
Assistant Professor of History
Benedictine College