Category Archives: Oulipo

Anaglyphic Text

In the Oulipo’s La littérature potentielle (Gallimard, 1973), François Le Lionnais brainstorms several ideas for new literary forms that would depend on computer technology. One of these forms is what he calls anaglyphic text:

Literary texts are always planar (and even linear, generally speaking): that is, they can be represented on a sheet of paper. A text could be composed whose lines were situated in a three-dimensional space. Reading it would require special glasses (one red lens and one green) using the anaglyphic method that has already been used to represent geometric figures and figurative scenes in space.

One will notice an attempt at orthogonalization within the plane, in the acrostics. (34)

By “acrostics” I think Le Lionnais means that one can read not only within a traditional two-dimensional plane but also depth-wise, focusing on elements in the same syntactic position on different planes and observing spatially how the elements differ semantically.
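The anaglyphic method Le Lionnais invokes rests on a simple geometric relation: each element is drawn twice, once in red and once in cyan, with a horizontal offset (parallax) proportional to its intended depth. A minimal sketch of that computation — the eye separation and viewing distance are illustrative values, not those of any actual implementation:

```python
def parallax(depth, eye_separation=6.0, viewer_distance=60.0):
    """On-screen horizontal shift between the red and cyan copies of an
    element at the given depth, by similar triangles. Positive depth
    pushes the element behind the screen plane; negative depth pulls it
    toward the reader. All quantities share the same unit."""
    return eye_separation * depth / (viewer_distance + depth)

# Lines farther from the reader get larger red/cyan offsets.
for z in (-20, 0, 20):
    print(z, round(parallax(z), 2))
```

The rendering itself (positioning the two colored copies of each verse) is left to the display layer; only the offset calculation is shown here.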

I have programmed two examples of anaglyphic text based on early examples of Oulipian writing (you will need the special glasses for the full effect). The first is an interactive version of Raymond Queneau’s Cent mille milliards de poèmes. In this version, the reader selects one of the ten options for each verse in a sonnet and the selected verses advance toward the reader while the other verses recede. The reader can see all the verses at once, generating a particular instance of a sonnet while keeping all options for each verse within sight.
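The combinatorial core of Queneau’s constraint can be sketched in a few lines of Python. The verse texts below are placeholders (the originals are under copyright), and the selection logic is reduced to a random draw rather than the reader’s interactive choices:

```python
import random

# Queneau's book supplies ten interchangeable versions of each of the
# fourteen lines of a sonnet, hence 10**14 (cent mille milliards)
# possible poems.
N_LINES, N_VARIANTS = 14, 10
verses = [[f"line {i + 1}, variant {j + 1}" for j in range(N_VARIANTS)]
          for i in range(N_LINES)]

def assemble_sonnet(choices):
    """Build one sonnet from fourteen variant indices (0-9), one per
    line; in the interactive version these are the verses the reader
    has pulled toward the foreground."""
    return "\n".join(verses[i][c] for i, c in enumerate(choices))

poem = assemble_sonnet([random.randrange(N_VARIANTS) for _ in range(N_LINES)])
print(poem)
print(f"{N_VARIANTS ** N_LINES:,} possible sonnets")
```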

Because of copyright considerations, I must refrain from sharing the full anaglyphic version of the Cent mille milliards de poèmes. Below is a screenshot to give you an idea of how it works.

The second is a version of N + 7, whereby one takes a text and replaces every noun with the seventh noun that follows it in a given dictionary. The procedure can be generalized to W ± n, where W is any part of speech (noun, verb, adjective, adverb, …) and n is any integer. The Oulipo’s first examples of N + 7 were produced “by hand” with printed dictionaries, but the procedure clearly lends itself to computation, where the writer can easily look up words and experiment with different source texts, dictionaries, and values of n. The first instance of a program for N + 7 was written by Dimitry Starynkevitch on a mainframe computer in 1963, when computers were relatively rare and expensive to use (Bens, 199). The web application below combines W ± n with anaglyphs as a viewing option. The dictionaries are sorted word lists extracted from the Brown and Gutenberg corpora (containing respectively 38,879 and 33,924 distinct lemmas) included with the Natural Language Toolkit, and the tools for parsing source text, conjugating lemmatized verbs, and performing other linguistic tasks come from the pattern Python module.
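The substitution step at the heart of W ± n can be sketched as follows. This toy version uses a hand-made sorted list of twelve nouns rather than the Brown and Gutenberg lemma lists, and a naive regular-expression tokenizer rather than the pattern module’s parser:

```python
import bisect
import re

# A toy sorted "dictionary" of nouns; the web application instead draws
# on lemma lists extracted from the Brown and Gutenberg corpora.
nouns = ["apple", "bird", "cat", "dog", "flower", "house",
         "moon", "night", "river", "star", "tree", "window"]

def w_plus_n(text, wordlist, n=7):
    """Replace every word that appears in the sorted wordlist with the
    word n entries later (wrapping around the end of the list), leaving
    all other words untouched."""
    def shift(match):
        word = match.group(0)
        i = bisect.bisect_left(wordlist, word.lower())
        if i < len(wordlist) and wordlist[i] == word.lower():
            repl = wordlist[(i + n) % len(wordlist)]
            return repl.capitalize() if word[0].isupper() else repl
        return word
    return re.sub(r"[A-Za-z]+", shift, text)

print(w_plus_n("The cat watched the moon over the river.", nouns))
# With this 12-word list and n = 7: cat -> star, moon -> bird,
# river -> dog.
```

Changing n, the wordlist, or the part of speech matched by the tokenizer yields the generalized W ± n.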

If you see a server error notice, try running the program in a separate window (current browsers do not like third-party cookies when displaying embedded content in an iframe).

The anaglyphic version of N + 7 allows one to experiment with different source texts, dictionaries, and values of n and to see ten variations of a source text simultaneously (with some scrolling), reading both within the plane and in depth.

The source files for both these web applications are available here.

Works Cited

Bens, Jacques. Genèse de l’Oulipo 1960-1963. Le Castor Astral, 2005.

Le Lionnais, François. “Idea Box,” trans. Daniel Levin Becker. All That Is Evident Is Suspect: Readings from the Oulipo 1963-2018, ed. Ian Monk and Daniel Levin Becker, McSweeney’s, 2018, pp. 34-39.

Queneau, Raymond. Cent mille milliards de poèmes. Gallimard, 1961.

From Elocutio to Inventio with Vector Space Models of Words

A cloud representing a vector space model of words from over 1,300 French texts published between 1800 and 1899.

In his 1966 essay “Rhétorique et enseignement,” Gérard Genette observes that literary studies did not always emphasize the reading of texts. Before the end of the nineteenth century, the study of literature revolved around the art of writing. Texts were not objects to interpret but models to imitate. The study of literature emphasized elocutio, or style and the arrangement of words. With the rise of literary history, academic reading approached texts as objects to be explained. Students learned to read in order to write essays (dissertations) where they analyzed texts according to prescribed methods. This new way of studying literature stressed dispositio, or the organization of ideas.

Recent developments in information technology have challenged these paradigms for reading literature. Digital tools and resources allow for the study of large collections of texts using quantitative methods. Computational methods of both distant and close reading facilitate investigations into fundamental questions about the possibilities of literary creation. Technology has the potential for exploring inventio, or the finding of ideas that can be expressed through writing.

The Word Vector Text Modulator is an attempt to test whether technology can foster inventio as a mode of reading. It is a Python script that uses vector space models of vocabularies mapped from a corpus of over 1,300 nineteenth-century documents to transform a text semantically according to how language was used within the corpus. An experiment such as this explores the potentiality of language as members of the Oulipo have done with techniques such as Jean Lescure’s S+7 method, Marcel Bénabou’s aphorism formulas, and the ALAMO’s rimbaudelaire poems. With technology we can investigate not only how something was written and why it was written, but also what was possible to write given a historical linguistic context.
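The underlying operation — shifting each word of a text to a nearby word in a vector space — can be illustrated with a toy model. The vectors and vocabulary below are invented for illustration and stand in for a model trained on the nineteenth-century corpus; this is not the Word Vector Text Modulator itself:

```python
import math

# Invented three-dimensional "word vectors" for a six-word vocabulary.
vectors = {
    "roi":      [0.90, 0.10, 0.00],
    "reine":    [0.88, 0.12, 0.00],
    "empereur": [0.80, 0.05, 0.10],
    "fleur":    [0.00, 0.90, 0.10],
    "rose":     [0.05, 0.85, 0.20],
    "mer":      [0.10, 0.10, 0.90],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def nearest(word):
    """The vocabulary word whose vector is most similar to word's."""
    v = vectors[word]
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(v, vectors[w]))

def modulate(words):
    """Shift each known word to its nearest neighbor in the space,
    leaving out-of-vocabulary words unchanged."""
    return [nearest(w) if w in vectors else w for w in words]

print(modulate(["le", "roi", "contemple", "la", "mer"]))
```

With a real model trained on period texts, the neighbors reflect historical usage, which is what makes the transformation a probe of what was possible to write in that linguistic context.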

Oulipian Code

Aphorismes de Mark Wolff

These aphorisms were generated with code developed by the Oulipo.

In the Atlas de littérature potentielle (1981, rev. 1988) the Oulipo mentions a number of experiments with computers as tools for exploring algorithmic constraints on writing. One example is the complete text of a computer program written by Paul Braffort that generates aphorisms (311-315). Today such programs are textbook exercises for learning computer languages, but Braffort wrote the program for a mainframe in the 1970s using the language APL (A Programming Language). Developed by Kenneth Iverson at IBM in the 1960s, APL is one of the earliest computer languages (after Fortran and Algol) designed to manipulate data as matrices. Although it is still in use by some programmers working in financial analysis, APL today is a fairly obscure language for which there are few compilers and interpreters.

In the 1981 edition of the Atlas, Braffort extols the virtues of APL not only as a system of notation for formalizing literary structures but also as code that executes complex algorithms (113). Although he claims his computer program provides “a thoroughly complete analysis of the procedures used” to generate aphorisms, it needs to be executed in order to test the analysis and observe how the algorithms work. To this end I have transcribed the code published in the Atlas so that an APL interpreter can compile and execute it. The code comprises specific functions and pre-loaded variables. To run the code, you need to )LOAD this file into an APL interpreter such as APLX (there are other interpreters out there, but APLX is the only one I have successfully installed on OS X and Ubuntu). At the prompt enter your name, and the code will deliver an aphorism for each character you type (including the space between your first and last names).
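For readers without an APL interpreter, the general shape of a slot-filling aphorism generator can be sketched in Python. The templates and word list below are invented, not transcribed from Braffort’s program; only the overall mechanism — deterministic template selection and slot filling, one aphorism per character of a name — follows the description above:

```python
import random

# Invented templates and word list: a rough analogue of the kind of
# slot-filling Braffort's APL program performs, not his actual data.
templates = [
    "{a} is the {b} of {c}.",
    "Only {a} can forgive {b}.",
    "Without {a}, every {b} becomes {c}.",
]
words = ["art", "chance", "constraint", "language", "memory", "potential"]

def aphorism(seed):
    """Deterministically derive one aphorism from a seed value, so that
    a name yields one aphorism per character, as Braffort's program
    does."""
    rng = random.Random(seed)
    a, b, c = rng.sample(words, 3)
    return rng.choice(templates).format(a=a, b=b, c=c)

for ch in "Mark Wolff":
    print(aphorism(ch))
```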

If you manage to get the code to run, you may wish to understand how it works. For that I recommend APLX’s online tutorial.