This article was written by Jono Brandel, new media artist and team lead for Writing with Open Access. It is the first in a series of Design Retrospectives on the prototypes commissioned by Cooper Hewitt’s Interaction Lab as part of Activating Smithsonian Open Access in Spring of 2021.
It was a series of interviews and conversations with museum practitioners that sparked the concept for Writing with Open Access. As part of my participation in the NEW INC “Cultural Futures” track, I learned from curators, exhibition designers, administrators, and docents that collection-based museums often have defined workflows for developing exhibitions. For this project I focused on the curator’s workflow. Exhibitions may evolve in many different ways at any particular cultural institution; here I will describe the process as I have observed it. It starts with the curator’s interest in a particular topic, typically grounded in academic research. A written statement, usually including exhibition themes, is then drafted. These themes serve as an outline for the exhibition “checklist,” a growing and evolving list of objects to be presented in the exhibition. The thematic groups of objects are meant to illustrate the central thesis of the exhibition.
The process of relating objects and images to ideas is closely related to my experience as a new media artist developing systems that produce compelling imagery. Take the image below, for example. It is a visual representation of Wassily Kandinsky’s book, Concerning the Spiritual in Art. The system that produced this image analyzed the sentiment of each sentence in the book and rendered it as a colored circle, offering a simple way to visualize a long text at a single glance. This system isn’t only applicable to Kandinsky’s book. It can also ingest and analyze other texts, using the data to create a vast artistic world to explore. Applying this kind of thinking to the curatorial process led me to imagine how to create an artistic world out of the Smithsonian’s vast collection of objects, where words and objects could merge in surprising ways. So I proposed an application for the Activating Smithsonian Open Access open call that would automate this process: write a statement and the application would respond with images for you to produce a virtual exhibition. While not exactly the final experience we built, this initial idea guided our team to our mission: bring the spirit of the museum to you through writing.
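As a loose sketch of that mapping (the palette and sizing rules here are invented stand-ins, not the system’s actual values), each sentence reduces to a circle whose color tracks its sentiment score and whose radius tracks its length:

```javascript
// One colored circle per sentence: a sentiment score in [-1, 1] picks the
// hue (assumed palette: +1 → red, -1 → blue), sentence length picks the radius.
function sentenceToCircle(sentence, score) {
  const hue = Math.round(((1 - score) / 2) * 240);
  return {
    radius: Math.sqrt(sentence.length), // longer sentences draw larger circles
    color: `hsl(${hue}, 70%, 50%)`,
  };
}

// Visualizing a whole book is then just a map over its sentences.
function bookToCircles(sentences, scores) {
  return sentences.map((s, i) => sentenceToCircle(s, scores[i]));
}
```

Because the mapping is deterministic, any text run through the same pipeline produces its own distinctive field of circles.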
Planning the development process
In thinking through how to move from concept to working prototype, we had to consider three constraints: we are a small team of three; the ASOA build period was a brisk ten weeks; and our proposal was ambitious. On top of this, we are spread across the globe: Sunny (Designer) in Seoul, Hiro (Creative Technologist) in London, and myself in New York. So we set up recurring weekly meetings to review everything anyone had worked on that week. To start, I shared my takeaways from the conversations I had with current and former museum employees. We then set up an Are.na board to explore the creative possibility space. Sunny used this information to choose the typeface and color palettes. She designed the layouts in Figma, a collaborative interface design tool. Once we had an idea of the form this experience could take, we focused our efforts on the technical challenges of delivering it.
We first dissected our concept into technical units. At its core, Writing with Open Access hinges on three technical transactions. First, the application needs to identify keywords in your writing. Then it needs to use those keywords to search Open Access, and finally respond to those keywords with images of corresponding objects. In the development process, we needed to test that each of these transactions worked independently before wiring them together sequentially to provide the end experience you get in the prototype. We forecast one week to prototype and test each transaction. By the fifth week, or halfway through the build period, we would have a rudimentary prototype that at least allowed you to write text and have images returned. We were unsure if this was even feasible, so it was important to fail as soon as possible in order to give ourselves time to adapt. If everything went as planned, we would spend the final five weeks refining the user interface, ensuring the experience could work on various devices (mobile and desktop) and that it could handle many people using it at the same time. Finally, Open Access provides information about its collections through an API, or “application programming interface,” which allows our project to speak to and retrieve information from the Smithsonian’s collection in a structured way. For this prototype to be successful we needed to comply with the structure of their API. For instance, every object has dimensions, and our codebase needed to correctly identify these properties in the data received from the Smithsonian.
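The three transactions compose into a single pipeline. The sketch below stubs each stage so the shape of that composition is visible; the function names and data shapes are our illustration here, not the prototype’s actual code (in the prototype, each stage calls a remote API):

```javascript
// Three transactions, wired sequentially. Each stage is stubbed so the
// pipeline's structure is testable without any network calls.
async function extractKeywords(text) {
  // Stub: the real version calls a language-analysis service.
  return text.split(/\s+/).filter((word) => word.length > 4);
}

async function searchOpenAccess(keyword) {
  // Stub: the real version queries the Smithsonian Open Access API.
  return [{ keyword, imageUrl: `https://example.org/${keyword}.jpg` }];
}

async function writeToImages(text) {
  const keywords = await extractKeywords(text);
  const results = await Promise.all(keywords.map((k) => searchOpenAccess(k)));
  return results.flat(); // one flat list of objects to lay out on screen
}
```

Keeping each stage behind its own function boundary is what made it possible to test the transactions independently before chaining them.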
Open Access is a vast resource, but we required other resources to lay out images and analyze users’ writing. Given these circumstances, Hiro and I explored what technologies we could rely on to realize the prototype. As in cooking, we were looking for additional ingredients to mix with Open Access. We looked at in-browser solutions like RiTa and recently published tools from OpenAI. On account of familiarity and function, we decided to use Google Cloud’s Natural Language API to break down writing into keywords. For each of the three transactions, we set up a unique URL that our prototype could query to receive information. For example, one URL accepts a sentence of text and returns the nouns and verbs, while the second fetches associated images, and a third adds metadata, like dimensions, to the images. Once each transaction was working and in place, we were able to develop the user interface as an HTML5 website. The interface is built with React.js, which manages the data between each transaction and the user interface. The image to the right reflects our thinking at the time of the proposal.
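At its core, the logic behind that first URL is a filter over part-of-speech tags. The sketch below assumes a syntax-analysis response of the general shape Google’s Natural Language API returns (one token per word, each tagged NOUN, VERB, and so on); the endpoint wiring itself is omitted:

```javascript
// Given a syntax-analysis response (one token per word, each tagged with a
// part of speech), keep only the nouns and verbs to use as search keywords.
function keywordsFromTokens(tokens) {
  return tokens
    .filter((t) => t.partOfSpeech.tag === 'NOUN' || t.partOfSpeech.tag === 'VERB')
    .map((t) => t.text.content);
}
```

Determiners, prepositions, and other function words fall away, leaving the words most likely to match object records in the collection.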
Building with clear direction and without a specific destination
In addition to our team’s weekly meetings, I met with mentors organized by Cooper Hewitt every other week. Receiving diverse opinions at various stages of development was instrumental in giving us clear priorities to address for the upcoming week. As we prototyped each individual transaction, we received unexpected feedback from mentors. Interaction Lab Director Rachel Ginsberg planted the seed for us to explore writing in multiple languages. Inclusive design consultants Sina Bahram and Corey Timpson informed us of easy-to-implement HTML properties and elements that made our interface usable with screen readers. Ryan King, Open Access Program Manager, and Andrew Gunther, Lead Application Developer, API, Stack and Application Architect/Engineer, gave generous advice regarding how best to leverage the Open Access API. They were also gracious and enthusiastic collaborators, open to receiving our feedback. Because we had developed each transaction separately, integrating these suggested features was straightforward. For instance, when running the Natural Language API, we simply added a Google Translate step, allowing us to expand users’ writing from English alone to ten languages in total. For image descriptions, the application first looks for a label from the Smithsonian. If the label does not exist, we use Microsoft Azure’s Image Caption service so that the content can be meaningful for users of screen readers. One element setting Open Access apart from other image search engines like Google or Bing is the metadata attached to each object. Every object in the collection has dimensions in centimeters. We use this information to size images relative to each other, creating a more true-to-life composition, similar to how the actual objects sit in the museum. Looking back at the development period, the plan we had proposed gave us structure to make consistent progress towards our prototype while remaining open to new ideas.
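That relative sizing reduces to a simple proportion: derive a pixels-per-centimeter scale from the widest object on screen, then apply it uniformly. A minimal sketch, assuming each object’s dimensions have already been parsed into centimeters (the field names are ours, not the API’s):

```javascript
// Scale every image so on-screen pixel sizes preserve the objects' real
// proportions: the widest object fills maxWidthPx, everything else follows.
function sizeRelatively(objects, maxWidthPx) {
  const widestCm = Math.max(...objects.map((o) => o.widthCm));
  const pxPerCm = maxWidthPx / widestCm;
  return objects.map((o) => ({
    ...o,
    widthPx: Math.round(o.widthCm * pxPerCm),
    heightPx: Math.round(o.heightCm * pxPerCm),
  }));
}
```

With this scheme, a five-centimeter stamp placed beside a meter-wide poster stays visibly tiny instead of both being normalized to the same thumbnail size.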
To our satisfaction the ten-week build period was incredibly productive!
Epiphanies and next steps
This process brought us to the Writing with Open Access prototype you can experience today. It has been delightful for the team to see that writing of all kinds, not just introductory statements for exhibitions, yields different kinds of object results. Victor Hugo’s poetic description of dawn delivers images of landscapes, while a famous passage from Romeo and Juliet shows portraits, and the lyrics to ‘Watermelon Sugar’ by Harry Styles, not surprisingly, show images of fruit. While inspired by the museum’s curatorial process, we understand now that this experience is really a flexible writing tool that can help people work through ideas and generate new ones with the help of supporting images.
We are proud of what we have built thus far, and we’re interested in opportunities to apply this application to more museum collections and datasets, and to find even more use cases. If you are part of an organization or institution and think this could be useful for your audience, please get in touch: firstname.lastname@example.org.
About the Author
Jono Brandel is a new media artist based in New York City. With a Bachelor’s in Design | Media Arts, a Minor in Latin from the University of California, Los Angeles, and a Master of Fine Arts in New Media from Paris College of Art, Jono’s work has been shown at international festivals including TED, Sundance, Tribeca Film Festival, SIGGRAPH, and the Japan Media Arts Festival. His commitment to Visual Music has sparked collaborations with musicians including applications with Lullatone, a live performance with Nosaj Thing, and interactive music videos for the likes of Matthew Dear, the Chemical Brothers, and Kimbra. He is currently a member of NEW INC, the New Museum’s art, design, and technology incubator.
About Activating Smithsonian Open Access (ASOA)
Created by Cooper Hewitt’s Interaction Lab and made possible by Verizon 5G Labs, Activating Smithsonian Open Access fosters a new approach to activating museum collections by expanding access to deep engagement for people of many abilities and interests worldwide, and supporting creative technology teams in the process. Each team received $10,000 to build a functioning prototype of a new digital interaction that enables play and discovery with 2D and 3D digitized assets from the Smithsonian’s Open Access collections and will retain ownership of all intellectual property developed from the program.
About Cooper Hewitt’s Interaction Lab
The Interaction Lab is an embedded R&D program driving the reimagining of Cooper Hewitt’s audience experience, across digital, physical, and human interactions. Since its Fall 2019 launch, the Lab has injected new ideas into the museum’s work through internal workshopping and strategy, a highly participatory public program series merging interactive design and museum practice, and a commissioning program that engages the design community as creative collaborators in creating the next wave of the Cooper Hewitt experience.