Lear and the Location of Literary Criticism

Shakespeare's Tragedy of King Lear has long been important to literary criticism, especially to the contingent that worries about being wronged and about wronging others. It is in discussions of Lear that the terms “data” and “critical data” first appear in literary studies: “data” in an 1819 lecture by Samuel Taylor Coleridge and “critical data” in an influential 1987 essay by Stanley Cavell. Both critics saw in Lear an agonized relationship to the scientific method and to other tenets of empiricism.
In Lear, intelligent design could be objectively proven. One could point to the play’s economical selection of critical data, or to the exact order in which that data became available or remained unavailable. It could be seen in the play’s appreciation of the difference between a bag-of-words model of literature and a sequence-based model of literature. It could be seen in the play’s comprehensive understanding of the predicament it portrays, first to last, from its innermost heart to the outermost exigencies of print and theater. Lear also successfully triggers the responses, such as over-identification, projection, transference, and punitiveness, that cause readers and playgoers to gravely misremember the plot and even stick with those mistakes. Lear is about this kind of vulnerability and has been probed for this vulnerability at least since Coleridge. No other work of literature has placed as much stress on interpretive validity. This talk will expand on the spot to which Lear pins literary criticism, the spot where data becomes inference, first described as significant and troublesome in Locke’s Essay Concerning Human Understanding.
Nan Z. Da teaches at Johns Hopkins University. She is the author of Intransitive Encounters, published by Columbia University Press in 2018, and The Chinese Tragedy of King Lear, recently published by Princeton University Press.
This event is part of the For the Humanities series. Registration is not required.