gallery@calit2: Electric Fields
Electric Fields will premiere on June 12, 2019, at 6:00 p.m. in the Atkinson Hall - Calit2 Auditorium. The performance will include a panel discussion with John Burnett, Alexander Khalil, and host Katharina Rosenberger, and will be followed by a 7:00 p.m. reception. RSVP requested to firstname.lastname@example.org.
Following June 12, the exhibit will be on display in the Atkinson Hall gallery until June 21, 2019. Daily hours are 12:00 p.m. - 5:00 p.m.
Electric Fields, an immersive audio-visual installation, is the result of composer Katharina Rosenberger’s work as the Composer-in-Residence at the Qualcomm Institute, the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2). Electric Fields has been developed in collaboration with neuroscientist Alexander Khalil and media artist John Burnett. The work draws its materials and inspiration from experiments with neuronal feedback and sonification, novel methods of sound diffusion, and projection mapping.
Katharina Rosenberger and Alexander Khalil have been exploring a number of methods for converting brainwaves to sound, a process referred to as “sonification.” Brainwaves already share many characteristics with soundwaves and, once recorded and stored digitally, can readily be rendered as sound. Much of the most easily recorded brain activity, however, including most of the brainwaves studied scientifically, lies at frequencies below the range of human hearing. By speeding up and then stretching these oscillations, Khalil and Rosenberger have made them audible, revealing a rich and nuanced sonic world.
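The speeding-up and stretching described above can be illustrated with a short sketch. This is not the artists' actual pipeline; the sampling rates, the 10 Hz "alpha-band" stand-in signal, and the stretch factor are all illustrative assumptions. Reinterpreting EEG samples at an audio playback rate shifts sub-audible oscillations well into the hearing range, and a crude interpolation-based stretch then lengthens the result (lowering its pitch by the stretch factor, so the net shift stays audible):

```python
import numpy as np

# Synthetic stand-in for a recorded brainwave: a 10 Hz oscillation with
# noise, sampled at 256 Hz for 60 seconds (illustrative parameters only).
eeg_rate = 256
duration = 60.0
rng = np.random.default_rng(0)
t = np.arange(0, duration, 1.0 / eeg_rate)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

# "Speeding up": reinterpret the 256 Hz samples at an audio rate.
# Played back at 44100 Hz, a 10 Hz wave sounds at ~1723 Hz.
audio_rate = 44100
speedup = audio_rate / eeg_rate  # ~172x

# "Stretching": interpolate to 4x the length. This lengthens the audible
# result but also divides the pitch by 4, for a net shift of
# 10 Hz * speedup / stretch, roughly 431 Hz -- still well within hearing.
# (Pitch-preserving stretches would use a phase vocoder or granular
# synthesis instead of plain interpolation.)
stretch = 4
n = signal.size
idx = np.linspace(0, n - 1, n * stretch)
stretched = np.interp(idx, np.arange(n), signal)

# Normalize to [-1, 1] for playback or writing to a WAV file.
audio = stretched / np.max(np.abs(stretched))
```

A minute of EEG compressed this way plays back in under two seconds, which is why long recordings can be surveyed quickly by ear.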
In its final presentation, Electric Fields transforms the gallery into an abstracted, multidimensional neuronal network. The audience encounters a mesh of suspended, interlaced fabrics that masks the room’s actual dimensions and immerses the viewer in the work. The video material consists of neural imagery, filigree and organically shaped drawings, and pointillist textures, all processed using the EEG data collected in Rosenberger’s and Khalil’s listening research. The sound diffusion uses a multichannel audio system combining near-field and far-field projection: the sound texture generated from the study is projected through far-field spatialization, while the beam technology specifically targets auditory stream segregation. In the correct listening position, the audience perceives sound objects, pulled directly from the general texture heard simultaneously, swirling closely around their heads.
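The far-field layer described above depends on spatializing sounds across a loudspeaker array. The installation's actual diffusion system is not specified here; as a generic illustration of one common technique, equal-power amplitude panning distributes a source between the two speakers of a ring that bracket its direction, keeping perceived loudness constant as the source moves:

```python
import math

def pan_gains(source_deg, speaker_degs):
    """Equal-power gains for a source azimuth over a ring of speakers.

    Returns one gain per speaker (ordered by sorted azimuth); only the
    two speakers bracketing the source direction receive signal, with
    cos/sin weights so the squared gains always sum to 1.
    """
    speakers = sorted(d % 360 for d in speaker_degs)
    n = len(speakers)
    gains = [0.0] * n
    for i in range(n):
        a, b = speakers[i], speakers[(i + 1) % n]
        span = (b - a) % 360 or 360   # arc from speaker a to speaker b
        offset = (source_deg - a) % 360
        if offset <= span:            # source lies within this arc
            frac = offset / span
            gains[i] = math.cos(frac * math.pi / 2)
            gains[(i + 1) % n] = math.sin(frac * math.pi / 2)
            break
    return gains

# A source halfway between the 0-degree and 90-degree speakers of a
# quad ring feeds both equally; total power stays at 1.
g = pan_gains(45, [0, 90, 180, 270])
```

Sweeping `source_deg` over time moves a sound object around the listener, which is the basic mechanism behind the swirling trajectories such systems can produce.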
Katharina Rosenberger is Professor of Music at the University of California San Diego (UCSD) and holds a Doctor of Musical Arts from Columbia University in New York. Her artistic and scholarly work uses interdisciplinary contexts and confronts traditional performance practices in terms of how sound is produced, seen and heard. In 2007, Rosenberger conducted a year of research at the Centre National Création Musicale (GMEM) in Marseille, France with a Reidhall Fellowship and a Camargo Foundation Residency Grant. Rosenberger is a recipient of the 2019 Guggenheim Fellowship. In the past she has been awarded the Hellman Fellowship, San Francisco, the Sony Scholar Award, and an Ernst von Siemens Musikstiftung Commission. Her interactive installations ROOM V (2007) and VIVA VOCE (2013) were awarded the Sitemapping/Mediaprojects Award by the Federal Office of Culture in Bern, Switzerland. In 2016, Rosenberger began a three-year Composer-in-Residence position at the Qualcomm Institute (QI), the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2). Rosenberger is an active composer and installation artist and shows her work at major festivals, predominantly in Europe, North America and Asia. Her music can be heard on Hat Hut Records/hat[now]ART, Grammont Musique Suisse/Migros Kulturprozent, Unit Records and Akenaton. She enjoys writing about art and education and was recently published in Musik & Aesthetik, a respected Klett-Cotta journal based in Stuttgart, Germany. She also serves on various review panels, such as those at the International Computer Music Conference (2009, 2011, 2012 and 2013) and the California Electronic Music Exchange Concerts (2010 and 2011).
Alexander Khalil is a lecturer in music at University College Cork, Ireland, and a project scientist at the Institute for Neural Computation at UCSD. He is an ethnomusicologist and cognitive scientist specializing in music learning and transmission, musicality in human interaction, and the perception of time. Informed by his long experience as a chanter in the Greek Orthodox tradition, his ethnographic work investigates timing and temporality among chanters both at ancient centers of the tradition, such as Constantinople (present-day Istanbul), and in diaspora. Khalil's work in cognitive science connects the ability to synchronize, or co-process time, rhythmically with other cognitive skills such as attention, and so links the practice of music to broader areas of cognition. Khalil received his doctorate in music at the University of California, San Diego in 2009. Upon completing his Ph.D., Khalil joined the Department of Cognitive Science at UCSD as a postdoctoral scholar and fellow at the Temporal Dynamics of Learning Center, a National Science Foundation Science of Learning Center. In 2014 he joined the Institute for Neural Computation, also based at UCSD, as a project scientist, where he has been developing methods for electroencephalographic (EEG) recording with multiple people in ecological environments.
John Burnett (b. 1993) is a multimedia artist based in San Diego, California. Drawing from a background in music composition, sound design, and technology, he creates technologically augmented and reactive multimedia installation works; sound and projection design for dance and theater productions; and concert works and film scores. John is also a member of the Sonic Arts research team, based in the Qualcomm Institute at UC San Diego, where he researches audio spatialization and audio-visual technology. John is a graduate of Oberlin Conservatory and is currently pursuing a PhD in music at UC San Diego.