On November 4, 2021, the Qualcomm Institute (QI) at UC San Diego premiered Capital Glitch: Arab Cyborg Turns to D.C., a large-scale solo art exhibition of cultural analytics drawn from social media collected on January 6 from the alt-right, conservative platform Parler and the broader network of Twitter. Reflecting on the events leading up to and taking place during the January 6, 2021 attack on the U.S. Capitol, the three-act installation consisted of interactive mosaics, glitch metal prints, and a mixed reality immersive experience. Gallery participants could populate Capital Glitch’s 30-foot mosaic by choosing among topics analyzed in the images, using their phones as game controllers. Viewers could select among a dozen images of Arab cyborgs as their avatar to navigate an archive of images and memes sourced from Parler, rendered into “day” and “night” mosaics of the U.S. Capitol building. The “day” version of the Capitol mosaic derives from a set of Parler social media and the “night” version from a set of Twitter data posted on January 6, 2021. Three dye-sublimation-on-metal prints from the Arab revolutions of 2011 hang in the liminal space between the wall of mosaics and the gallery experience. While “Birds,” “Cairo Graffiti,” and “Tapestry” are artistically a pre-history of the January 6 moment, they are also thematically related to this new body of work.
Outside the gallery, viewers saw consequences of U.S. foreign policy; similar forces play out in the domestic sphere within the gallery’s interior. Using mixed-reality smart glasses to experience the augmented reality “Insurrection Video Procession,” participants could gesture inside holograms to interact with an archive of videos that populate a map around the U.S. Capitol building, and with words that connect the dots among users within online conversations. This augmented reality data visualization traces over five hundred videos posted to Parler on January 6, 2021, from the stage where Donald Trump gave his “Stop the Steal” speech to the U.S. Capitol building where the attempted coup took place. The videos were situated within a table-sized miniature of that location. Wearing a headset, viewers could return to the scene and choose which videos to watch among those uploaded in synchronicity by nearby users on the same network. Instead of selecting filter options on a website to visually make sense of data, users of this 3D augmented reality can still filter through video selections, but through an embodied experience of the context in which these media were captured and shared.
Offering glitches, mosaics, and mixed realities, VJ Um Amel (and I) revealed a secret world of code in an abstract and algorithmic aesthetic. The singularity of each image stands for an infinite number of visual memories, some recorded, most not. The use of the mosaic mode of “assemblage” is intended to capture this notion of the infinite, reiterative algorithmic form of any single visual expression. It attends to the text within the archive and approaches the archive as a text. These disenchanted yet synchronized voices warn us of the dangers of the radical right and its data embodiment. Their assemblage revealed democracy and authoritarianism as a spectrum, not unlike the potentialities of data embodiment itself, located on a range of political imaginaries and possibilities.
The opening night gala included a presentation by VJ Um Amel as well as a panel discussion featuring VJ Um Amel, Jordan Crandall, Ricardo Dominguez, and Ronak Etemadpour, moderated by Amy Alexander.
Visit the “virtual gallery” at vj.live to experience a website version of the exhibition, which includes a few additional machine learning data visualizations.