The epistemological basis for the modern critical edition is fundamentally taxonomic: it assumes a prior simplicity, whereby the proliferating textual variants distributed across manuscripts, inherent in the very idiosyncratic nature of manuscript production, all descend vertically from an original common source. Also generally assumed is a monogenetic origin from a single parent. Both assumptions prove to be highly problematic for understanding medieval Arabic and Persian book culture. The messy reality of multiple recensions that inhabit medieval manuscripts as testaments to the collaborative process of textual production may be, in part, preserved in the form of a critical apparatus within an edition. In the process of mechanical reproduction, this multivalent record of dissemination is displaced largely into the space of the margins. However, as with any act of translation, the technology of the printed page produces both a surplus and a deficit of meaning. While codicological cacophony may be lost or marginalized, what is gained is the ability to telegraph this information to an even broader audience.
In this ever-iterated process of surplus and deficit, we find today, in many of the digitally searchable forms of medieval Arabic and Persian archival material, the complete removal of the critical apparatus, if one ever existed, and with it any semblance of this polyphonic reception history. Likewise, what is available, either digitally or in print, is usually based on the narrow selection of what has been edited. Significant parts of this reception history have been lost in the nineteenth- and twentieth-century constructions of medieval Arabic and Persian writings. Furthermore, the medium of transmission, from manuscript, to printed page, to searchable text, inevitably shapes not only what information is presented, but how it is accessed; this in turn guides both reading practices and modes of analysis. In this paper I draw on examples from medieval Arabic and Persian manuscripts, along with their print and digital forms, to explore the process of loss and recovery structuring technologies of transmission.
Author: Travis Zadeh (Haverford College)
Scholarship on the Arab world, as on other regions, is always haunted by the absent voices of those who cannot be heard. Our understanding of events, our perspective on times and places, are always skewed by the uneven record that comes to us for interpretation. At first blush it may appear that the spread of internet access and the rise of social media, in particular Facebook, whereby anyone can distribute reams of information and images globally at low or no cost, mitigate this problem. However, such technologies bring their own technical and ethical challenges. I propose to address some of these challenges through a discussion of what I have described as an indigenous digital humanities project: a Facebook group called “Mukhayyam al-sumud al-usturi tal al-Za`tar.” Created by survivors and descendants of the 1976 siege and destruction of the Tal al-Za`tar refugee camp in Beirut, the site aims to serve as a node in the network of former residents of the camp who are now globally dispersed, as well as a depository for images, documents, and crowd-sourced reconstructions of memories and geographies. The site (and others like it) and its contributors may serve as a rich source for scholars interested in creating more authoritative repositories or digital reconstructions of this and other neighborhoods and towns that were erased or irrevocably altered during the violence of the Lebanese civil war. However, they, too, are marked by dominant voices and aesthetics that may skew our understanding of the past.
Author: Nadia Yaqub (Univ. of North Carolina – Chapel Hill)
In 2010, a grant from the New York University Abu Dhabi Institute launched the Library of Arabic Literature, a book series that aims to publish key works of pre-modern and classical Arabic literature in bilingual editions, with the Arabic edition and English translation on facing pages. The General Editor of the series is Philip Kennedy, Associate Professor of Arabic at New York University, who is aided by an eight-member Editorial Board consisting of scholars of Arabic and Islamic studies. The five-year grant envisioned an initial library of thirty-five published books, with translations to be done by scholars of Arabic. It also specified an XML-first production system to ensure maximal flexibility in future digital uses of the Arabic texts and English translations. The series is published by New York University Press, which drew on its previous experience in bilingual publishing through the now-defunct Clay Sanskrit Library series.
As Managing Editor of the Library, I will present in this paper an overview of the experiences of the Library of Arabic Literature series in the past two years, particularly with respect to the digitization and XML-tagging of Arabic texts. The first three books have just been published, and we are currently confronting the challenges of rendering Arabic text correctly on commercially available e-readers. Eventually, once we have a critical mass of published books, the Library of Arabic Literature hopes to make the full series accessible as a searchable electronic corpus. In this paper, I hope to highlight and share some of the insights the Editorial Board and I have gleaned through our work on this series.
Author: Chip Rossetti (Managing Editor, Library of Arabic Literature)
Recent developments in the digital sphere have offered new opportunities and challenges to humanists. Equipped with new digital methods of text analysis, scholars in various fields of the humanities are now trying to make sense of huge corpora of literary and historical texts. Perhaps the most prominent of such attempts is the work of Franco Moretti and his abstract models for literary history that trace long-term patterns in English fiction. Inspired by Moretti’s approach, I seek to develop abstract models for the analysis of pre-modern Arabic historical literature, relying mainly on various text-mining techniques being developed at the intersection of statistics, linguistics, and computer science. At the moment I concentrate primarily on biographical collections, a genre that includes several hundred multi-volume titles (the largest collection, al-Dhahabī’s “History of Islam,” covers 700 years and contains about 30,000 biographies). Working with a corpus of 10 biographical collections (about 125 printed volumes; 45,000 biographical accounts), I am developing an analytical tool that can later be used to study other biographical collections, ideally all of them together. In the long run I hope that the results of my work will pave the way to the development of analytical tools for other genres of pre-modern Arabic literature such as chronicles, ḥadīth collections, interpretations of the Qur’ān, compendia of legal decisions, etc.
Working with my biographical collections, I look primarily at such kinds of biographical data as “descriptive names” (nisbas), dates, toponyms, and, more recently, rather loosely defined linguistic formulae and wording patterns. The analysis of different combinations of these data allows one to trace various social, religious, and cultural patterns in time and space. I am particularly interested in how the Islamic world changed over the period 640–1300 CE: how cultural centers shifted; how different social, professional, and religious groups replaced and displaced each other; how different regions were connected with each other and how these connections changed over time. The results of my analysis will be presented in the form of graphs and geographical maps (some current examples of my work can be found at www.alraqmiyyat.org).
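The kind of extraction described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the author's actual tool: the entries, the transliterated form of the text, and the crude nisba and death-date patterns are all invented for demonstration, whereas the real corpus is in Arabic and far messier.

```python
import re
from collections import Counter

# Toy biographical entries in transliteration; the real data are Arabic text.
entries = [
    "Ahmad ibn Muhammad al-Baghdadi al-Shafi'i, died 410",
    "Ali ibn Ahmad al-Nisaburi al-Hanafi, died 445",
    "Muhammad ibn Ali al-Baghdadi al-Hanbali, died 452",
]

# Crude stand-ins for real pattern matching: an "al-...i" form for nisbas,
# and a three-digit AH death date.
NISBA = re.compile(r"\bal-([A-Z][a-z']+i)\b")
DEATH = re.compile(r"died (\d{3})")

nisba_counts = Counter()
deaths_by_halfcentury = Counter()
for entry in entries:
    nisba_counts.update(NISBA.findall(entry))
    m = DEATH.search(entry)
    if m:
        year = int(m.group(1))                      # AH year
        deaths_by_halfcentury[(year // 50) * 50] += 1

print(nisba_counts.most_common(3))
print(sorted(deaths_by_halfcentury.items()))
```

Aggregating such counts by half-century and by toponym is what makes shifts of cultural centers visible as time series and maps rather than anecdotes.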
Author: Maxim Romanov (Univ. of Michigan)
I will discuss approaches to the digitization of Islamic books to explore its impact on Islamic and Middle East Studies, drawing on my research about the manuscript-print transition in Muslim societies within the context of technology transfer across Eurasia.
The digitization of books is widely accepted, because the digital processing of written language is merely the latest technology used for the display and storage of texts. E-books are on the verge of making the printed book an obsolete object, since for readers access to texts is all that matters. But the naturalization of the e-book is accompanied by the risk of diverting resources from the preservation of the material artifacts. Not every digital text has metadata that link the digital copy to its physical original, whose whereabouts and provenance are known. Moreover, the long-term costs of digitization are rarely considered, even though the future functionality of digital surrogates, despite their immaterial appearance on our screens, depends on continued investment in hardware and software, as well as in human labor.
The ubiquitous use of digitization by a wide range of institutions reflects the fact that scholars, libraries, and grantmaking agencies, such as CLIR and the Imam Zayd Cultural Foundation, employ digitization for reasons quite different from those of commercial publishers. In North America the digitization of Islamic books is used to facilitate access to rare texts (e.g., Caro Minasian Collection, Hathi Trust Digital Library), to preserve endangered cultural heritage (e.g., Afghanistan Digital Library, Yemeni Manuscript Digitization Initiative), or to allow for the crowdsourcing of uncataloged manuscripts (e.g., Collaboration in Cataloging). I will argue that in Islamic and Middle East Studies digitization receives little critical attention, because access to a rare text is valued more highly than the historical interpretation of a specific book as material evidence for the transmission of knowledge.
Author: Dagmar Riedel (Columbia University)
Digitally-enabled spatial analysis can generate hypotheses, substantiate arguments, and communicate findings at a glance. In this paper, I will demonstrate how spatial analysis reveals the topography of readership in seventeenth-century Istanbul. Using WorldMap has allowed me to collate data from many different sources, including court records, probate inventories, and waqfiyyas, into a single map in order to identify larger patterns. Given the exploratory nature of the conference, I will briefly share the “messy” interim steps I took as well as the more polished maps that resulted. During the remainder of my talk, I will reflect on the promises and limitations of open-source, user-contributed mapping. WorldMap allows anyone to create a layer which can be combined with other users’ layers. In other words, it holds the potential to facilitate the kind of collaborative work that is said to be a hallmark of the digital humanities. At the same time, my experience as a consultant to digital humanities projects (while working for Ithaka before graduate school), provides some cautionary tales about the sustainability of digital projects and challenges presented by “user-generated” content.
Author: Meredith Quinn (Harvard University)
Recent developments in digital humanities pose anew the challenge of sources, concepts, and possibilities for doing history differently. Much of the current debate has focused on the vices and virtues of the quantity of (in addition to the ease of access to) the archives that digitization has made available to historians, and on whether methods of quantitative social research can now be meaningfully employed by historians (and scholars of the humanities more generally). Based on a digital archive project, Women’s Worlds in Qajar Iran, started in 2009, this paper will probe the possibilities for doing different kinds of cultural and social history of nineteenth-century Iran, enabled by the accessibility of a multi-genre archive. What happens to/in history if we could persistently read textual documents, visual material, objects of everyday life, recorded memories, etc. in relation to and through each other’s meaning-making work?
Author: Afsaneh Najmabadi (Harvard University)
How can sonic phenomena best be represented in academic discourse? While the question has long preoccupied music studies, recent developments in technologies of sound recording and dissemination have given rise to new possibilities for the inscription of knowledge through digital sound recordings. With the growing discipline of sound studies, the role of knowing-through-sound has moved from periphery to center, challenging the ocularcentric tropes of modernity in favor of a more multivalent notion of sensation and knowledge. The study of Islam has mirrored this development as well, as scholars have turned their attention to the rich acoustics of Islamic practice, underscoring the aurality of the Qur’an, devotional practices (salah, zikr, etc.), Islamic architecture and even a more general sense of a Muslim (counter)public.
In this paper and audio presentation, I explore the sonic idiosyncrasies of Islam in Berlin, especially in the context of migration from Turkey in the past half century, through the process of audio recording. Through sound recordings I have made in the past two years, I argue that Berlin’s various Muslim communities (specifically Caferi Shi’ites, Halveti-Cerrahi Sufis, Alevis, and Hanefi Sunnis) articulate significant differences between one another through sound. This sense of heterophony—a simultaneous sounding of difference and oneness—is critical to the formation of an acoustics of Islam, especially in contexts of migration and transnationalism, as groups that previously inhabited distant geographical homelands are now placed in close proximity. While the sounding of these differences can be detailed verbally—that is, inscribed with words—they are in fact natively acoustic arguments, represented more directly and emphatically through sound itself.
Author: Peter McMurray (Harvard University)
The private manuscript libraries of Yemen comprise one of the world’s largest and most important collections of Arabic manuscripts. Collectively, these 6,000 private libraries possess some 60,000 codices, many of which are unique. But this irreplaceable trove of manuscripts is threatened. In recent years, Yemen’s private libraries have suffered great losses, in part due to extremists who are ideologically opposed to the Zaydi Shiite school of Islam and have targeted Zaydi manuscripts for destruction. In the past ten years, over 10,000 manuscripts, including several entire libraries, have been destroyed.
This paper describes the efforts of The Imam Zaid ben ‘Ali Cultural Foundation (IZbACF), a non-profit, non-governmental organization devoted to digitally preserving this collection. Their efforts have recently been fortified by The Yemeni Manuscript Digitization Initiative (ymdi.uoregon.edu), a collective of Middle East librarians and leading scholars of classical Islam, Middle Eastern history, and Arabic literature from North America, Europe, and the Middle East. In September 2010, YMDI’s partner institutions, Princeton University Library and the Free University of Berlin, secured a $330,000 Enriching Digital Collections grant from the National Endowment for the Humanities (NEH) and the Deutsche Forschungsgemeinschaft (DFG). The goal of the NEH/DFG grant has been to create an infrastructure through which manuscripts in private libraries in Yemen are digitally preserved and made widely available through Princeton University’s Digital Library. This paper presents YMDI’s progress and prospects.
Author: David Hollenberg (University of Oregon)
This presentation concerns an NEH-funded project I am heading to develop a tool called Prosop. Prosop’s first aim is to assemble a database of descriptions of a very large number of historical individuals of inferior socio-economic rank to those who feature in most prosopographic projects. The tool is meant to preserve such information in its native format, without any fixed category requirements. It will then find connections within a very large pool of demographic data and allow aggregate analysis. Ultimately, Prosop aims to make the various historical description and categorization schemes themselves the subject of research.
Prosop is intended to help three kinds of users: microhistorians who have completed research projects and want to preserve their data, the collection of which cost them their eyesight and at least one marriage; microhistorians doing new work, who want to collect material in a format more usable than a word processor document or spreadsheet; and family historians, who are currently doing tremendous “crowd-sourcing” style work with primary documents, work that is being captured by for-profit sites such as Ancestry.com but passed over by professional historians.
This presentation treats a methodological issue: the techniques that we use to deal with the tremendous volume of data generated by Middle East microhistories. I will describe my sense of the challenges and potentialities of this aspect of our work, and discuss ways that I think Prosop can support collaborative historical work. I will explain, in fairly general terms, the features of the data structure of Prosop and its most innovative aspect, which is on-the-fly ontology. I will invite participants to describe the technical characteristics of their own work, their needs, and the solutions they imagine. I will focus this conversation around the ongoing design of Prosop, but my aim is to facilitate a conversation that treats the broadest issues raised by technology-assisted prosopography.
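One way to picture the combination of schema-free records and an on-the-fly ontology is sketched below. This is my illustrative reading, not Prosop's actual data structure: the records, field names, and the researcher-supplied alias table are all hypothetical. The point is that each source keeps its native categories, while equivalences between labels are asserted after the fact rather than imposed up front.

```python
from collections import defaultdict

# Schema-free records: each source keeps its own native categories.
records = [
    {"name": "Fatima", "occupation": "seamstress", "quarter": "Midan"},
    {"name": "Yusuf", "metier": "porter", "residence": "Midan"},
    {"name": "Salim", "occupation": "porter"},
]

# On-the-fly ontology: discover the categories and values actually present
# in the data, rather than defining them in advance.
ontology = defaultdict(set)
for rec in records:
    for field, value in rec.items():
        ontology[field].add(value)

# Researcher-supplied equivalences between source-specific labels.
aliases = {"metier": "occupation", "residence": "quarter"}

def normalized(rec):
    return {aliases.get(k, k): v for k, v in rec.items()}

merged = [normalized(r) for r in records]
porters = [r["name"] for r in merged if r.get("occupation") == "porter"]
print(porters)  # finds both porters despite differing source vocabularies
```

Because the alias table is data rather than schema, the categorization scheme itself stays visible and revisable, which is what makes it a possible object of research.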
Author: Will Hanley (Florida State University)
This paper elaborates the tools used for mapping the density of news reports onto the city of Damascus and its hinterlands, which were developed in the context of my doctoral research titled “To whom belong the streets? Property, propriety, and appropriation: The production of public places and public spaces in late Ottoman Damascus”. The paper discusses the various technologies and their implications for the analysis of the results produced, against the backdrop of a case study on violent incidents.
The research is based on c. 7,000 news reports on events in Damascus between 1875 and 1914, drawn from the newspapers Lisān al-Ḥāl, al-Bashīr, Thamarāt al-Funūn, Ḥadīqat al-Akhbār, Suriye, and al-Muqtabas. The reports were excerpted, partially transcribed, coded for topic and locations, and stored in a relational database.
In order to run named entity recognition for locations on the database of sources, I produced a file of geocoded place names from written sources, old maps, secondary literature, and geographic authority files. The named entity recognition then counts the number of articles on a specific topic, or containing a particular catch phrase, for each known location. This is done by running the data as XML files through XSLT transformations, producing a JSON data stream. The results are then mapped using the SIMILE Exhibit 3 framework, which provides customisable visualisations of the data stream on a base layer of maps that can be displayed in any browser.
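The counting and geocoding step can be sketched in miniature. The sketch below is purely illustrative: the topics, place names, and coordinates are invented, and the actual pipeline runs through XSLT rather than Python. It shows only the shape of the transformation, from coded reports plus a gazetteer to a JSON feed of the kind a browser-based mapping framework can consume.

```python
import json
from collections import Counter

# Toy (topic, location) codings standing in for the news-report database.
reports = [
    ("violence", "Maydan"), ("violence", "Hawran"),
    ("violence", "Hawran"), ("public works", "Suq Saruja"),
]

# Hypothetical geocoded gazetteer standing in for the authority file.
gazetteer = {
    "Maydan": (33.488, 36.297),
    "Hawran": (32.750, 36.400),
    "Suq Saruja": (33.518, 36.299),
}

# Count articles per (topic, location) pair.
counts = Counter(reports)

# Emit one item per pair, with coordinates attached, as a JSON data stream.
items = [
    {"label": place, "topic": topic, "count": n,
     "latlng": "{},{}".format(*gazetteer[place])}
    for (topic, place), n in counts.items()
]
print(json.dumps({"items": items}, indent=2))
```

Once the reports are reduced to counted, geocoded items, rendering them as map layers is a display problem rather than a data problem.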
This approach makes it possible a) to cope with a very large body of short written sources, b) to recognise immediately correlations of certain topics with specific locations, and c) to evaluate sources as to their biases.
Applied to the discourse on violence in late Ottoman times, we see a discourse on legitimate rule that produces a spatial dichotomy of the urban, civilised, peaceful centre and the predominantly non-urban, backward, and violent populations of the peripheries. Armed Bedouins and Druze of the South become the epitome of the vicious (semi-)nomadic bandits threatening the urban “flock” under the protection of the Ottoman authorities. This dichotomy and the positive correlation between violence and distance from the seat of government are also found in the topography of Damascus itself.
- For this purpose, I used off-the-shelf reference-management software, which stores all data in an SQLite database; www.thirdstreetsoftware.com, www.sqlite.org.
- The most important authority file was the GeoNames database, which is licensed as CC BY and can be downloaded or queried. As I use the reference file of geocoded place names for various other projects, it adheres to the encoding standards of TEI P5.
- I have made examples available for violent incidents and riots.
The Analytical Database of Arabic Poetry will represent an important contribution to the emerging field of digital studies in Arabic philology. The database will include comprehensive data on the vocabulary of early Arabic poetry (6th-8th centuries A.D.) in the form of an electronic dictionary. With the help of the analytical tools of the database, each lexeme of the entire lexical corpus will be assessed in relation to the literary framework of its attestation including information on the genre of the relevant poetic text and on the tribal, chronological and geographical background of its author. Moreover, the database will record in detail the data of textual transmission of the works of early Arabic poetry in the context of Arab-Muslim scholarship of the 8th to 10th centuries. This comprehensive collection of data and its analytical classification will for the first time allow systematic investigation into the process of semantic change in the Arabic language and the development of a philological approach to the language.
A ground-breaking feature of the database results from the possibility of including cross-references to parallel linguistic material provided by inscriptions, papyri and the Qur’ān, which have never been studied in relation to each other. Thus, the Analytical Database of Arabic Poetry promises to become the cornerstone of a common digital platform for the Arabic language, which will bring together several current European projects in the field of digital Arabic philology, including the two ERC-funded projects “Glossarium graeco-arabicum” (ERC Ideas Advanced Grant 249431, Cristina D’Ancona, Università di Pisa, Italy, Gerhard Endress, Universität Bochum, Germany) and “Digital Archive for the Study of pre-Islamic Arabian Inscriptions (DASI)” (ERC-AG-SH5 ERC Advanced Grant 269774-DASI, Alessandra Avanzini, Università di Pisa, Italy) as well as other initiatives, such as the “Safaitic Database Project” (Michael C. A. Macdonald, Oxford University, UK), the “South Arabian Lexicographical Database of the University of Jena” (Peter Stein, Jena, Germany), the “Arabic Papyrology Database” (Andreas Kaplony, LMU München, Germany, Johannes Thomann, Universität Zürich, CH), the “Corpus Coranicum” project (Michael Marx, BBAW, Berlin, Germany) and the “Arabic and Latin Glossary” (Dag Nikolaus Hasse, Universität Würzburg, Germany).
Author: Kirill Dmitriev (St. Andrews)
In recent years growing attention has been paid to the circulation of texts and to various textual practices throughout the Islamic world in general and the Ottoman Empire in particular. Most studies, however, have been qualitative in nature. My paper seeks to demonstrate the advantages of digital humanities for the study of the circulation of manuscripts and the ways in which they were used and consulted. To illustrate these advantages the paper takes as its case study the circulation of legal texts across the Ottoman Empire. More specifically, the case study is based on a comparison of two fatawa collections from the mid-seventeenth century that I have digitized for my research: the fatawa collection of the chief imperial mufti, şeyhülislam Minkarizade Efendi (1609–1677 or 1678), and that of the famous Palestinian mufti, Khayr al-Din al-Ramli (1585–1671). By focusing on the special features of each of the fatawa collections, I hope to draw attention to the advantages these databases, and the digital humanities more broadly, offer for this kind of study on the one hand, and to call attention to what the databases conceal on the other. Finally, through this case study, the paper intends to discuss how this methodology can be applied to the study of texts and their circulation in other contexts and time periods in Islamic history.
Author: Guy Burak (Bobst Library, New York University)
This paper will present the conclusions of an interdisciplinary seminar focused on a Seljuq Qur’an from Hamadan, Iran. The manuscript, shelfmark N.E.-P. 27, is dated to 1164 CE by a colophon, and is now held by the University of Pennsylvania Museum of Archaeology & Anthropology. Students with backgrounds in Near Eastern Languages and Culture and Art History collaborated to investigate the production of the book – one of the few complete Qur’an manuscripts dated to this period. Individual students produced focused studies of different features of the book. The main text of the manuscript was probably written by a single calligrapher, although several campaigns of textual corrections are scattered throughout the book. The verse markers and sura headings, however, are the product of at least four different illuminators. The book was significantly repaired sometime in the eighteenth century, perhaps when it was donated as waqf by Amir Ahmad Jawish (d. 1786) to the al-Azhar Mosque in Cairo.
Digital tools played a large role in the art historical analysis of the book. In addition to digitizing the manuscript, targeted pigments were analyzed using portable XRF analysis. Digital photos of the manuscript were enhanced to reveal the complex construction of the frontispiece. Computer vision algorithms, especially the scale-invariant feature transform (SIFT), were explored but rejected. Principal component analysis (PCA) and other statistical techniques, like co-occurrence analysis, proved extremely useful for revealing trends in the complex and varied sura heading decoration. The question for art historians, however, is what exactly these trends represent: individual artists, formulaic models, or some combination of the two? While similar techniques are widely used by archaeologists for the analysis of geochemical data in ceramics, their deployment in art historical contexts is still developing and raises a number of methodological questions that this paper will explore.
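The PCA step described above can be illustrated on a toy example. Everything here is hypothetical: the motif names and the presence/absence codings are invented, and the real study would build its matrix from systematic coding of the manuscript's headings. The sketch only shows the mechanics of projecting coded decoration onto principal components so that groupings among headings become visible.

```python
import numpy as np

# Toy presence/absence matrix: rows = sura headings, columns = decorative
# motifs (e.g. palmette, braid, gold ground, contour panel).
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
], dtype=float)

# PCA via singular value decomposition of the column-centered matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                     # headings projected onto components
explained = s**2 / np.sum(s**2)    # variance share of each component

print(np.round(explained, 2))
print(np.round(scores[:, :2], 2))  # first two components per heading
```

Headings that cluster in the score plot share motif combinations; whether such clusters mark individual illuminators or shared workshop models is precisely the interpretive question the paper raises.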
Author: Alex Brey (Bryn Mawr)