DEEP ALICE is an interactive installation in which the interactor is invited to participate in the construction of images that will constitute a machine’s “imaginary.” Through AI-based image generation processes, this machine transforms that imaginary into “machinic dreams.”
The machinic imaginary (a gradually accumulated collection of images/scenes created by the interactor and assimilated by the machine) is constructed as follows: the interactor composes scenes using paper figures (cut-outs from John Tenniel's illustrations for Lewis Carroll's "Alice's Adventures in Wonderland" and "Through the Looking-Glass") by selecting and placing them on a surface. This surface, which sits on a table (table 1) beneath a suspended camera (camera 1), is green and holds loose green and red paper cut-outs. By selecting and positioning these elements (figures and colored cut-outs) beneath the camera, the interactor discovers that the resulting composition/scene is projected onto the wall. There are two projections on the wall: screen 1 displays the live video of the composition on table 1; beside it, screen 2 displays a mirrored copy of the video on screen 1.
On both screens, the interactor can observe that the character/figure cut-outs "float" over dynamic textures whose placement is determined by the positions of the green and red cut-outs set by the user: using the chroma key technique, images of dynamic geometric patterns are mapped onto these cut-outs. The complexity of the textures increases over time, starting from surfaces with subtle color variations and evolving into gently moving geometric patterns. The video stream composed of character cut-outs and dynamic backgrounds is sent to a set of software systems that automatically produce AI-generated images, composing a synthesized video stream in real time. However, the result of this image-generation process is only revealed if someone (the same or a new interactor) spins a spiral placed on table 2 (beside table 1) beneath a second suspended camera. When the spiral is spun, the mirrored image of table 1 projected on screen 2 begins to dissipate, revealing a new video image (the image on screen 1 remains unchanged). This real-time generated video results from transforming the original video of table 1 through prompts (text messages sent to the AI software that influence, or even determine, the content of the generated images), which are automatically generated from text fragments of the two books.
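As a rough illustration of the chroma-key step described above, here is a minimal sketch in Python with NumPy. The installation's actual software is not specified in this text, so the function name, the channel-dominance thresholds, and the per-frame compositing approach are all assumptions; a production system would likely key in a perceptual color space with soft mask edges.

```python
import numpy as np

def composite_chroma_key(frame, green_texture, red_texture):
    """Replace green and red cut-out regions of an RGB frame with
    pixels from two dynamic texture images of the same size.
    A simplified sketch: thresholds are illustrative, not the
    installation's actual values."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # A pixel counts as "green" when its green channel clearly
    # dominates the other two channels (and likewise for red).
    green_mask = (g > 100) & (g > r + 40) & (g > b + 40)
    red_mask = (r > 100) & (r > g + 40) & (r > b + 40)
    out = frame.copy()
    out[green_mask] = green_texture[green_mask]
    out[red_mask] = red_texture[red_mask]
    return out
```

Run per video frame, this keeps the paper figures intact while the green and red cut-outs become windows onto the moving geometric patterns.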
The longer the interactor keeps spinning the spiral, the deeper the immersion into the machine's "dream." This dream, in turn, becomes increasingly complex and strange, gradually losing its initial relationship with the real images (those prior to AI processing). This transformation in dream quality results from prompts that become more complex over time: they begin as random trios of characters from the two books, issued every 20 seconds; progress to sentences randomly composed from fragments of the books; and finally reach the inversion of each letter in the sentences generated by the previous method. The visual outcome of these strategies is a set of images that initially maintain some resemblance between the visual inputs (compositions generated on the first table) and the prompts, and that gradually dismantle this similarity, generating hybrids of visual and textual inputs. This hybridization, observed through the differences between the original image on screen 1 and the synthesized image on screen 2 (placed side by side), is radicalized in the final stage, where the result (sentences with inverted letters) becomes indecipherable to the machine, forcing it to mobilize random images still anchored in the structures established by the visual inputs and in its trained image database (model). At this moment, when the machine produces images from the tension between its present and its past, from the friction between an illogical external language and its internal logical structure, we can ask the same question Alice asks at the end of "Through the Looking-Glass": "Who do you think dreamed it?".
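The three prompt stages described above can be sketched as follows. The character and fragment lists here are illustrative placeholders (the installation draws them from the full texts of the two books), and reversing the letters of each word is one plausible reading of "inversion of each letter"; none of these names or details come from the installation's actual code.

```python
import random

# Illustrative placeholders: the installation samples characters and
# sentence fragments from the full text of Carroll's two books.
CHARACTERS = ["Alice", "White Rabbit", "Cheshire Cat", "Red Queen",
              "Mad Hatter", "Tweedledee", "Humpty Dumpty"]
FRAGMENTS = ["fell down the rabbit-hole", "grinned from ear to ear",
             "ran across the chessboard", "remembered it backwards"]

def stage1_prompt(rng):
    """Stage 1: a random trio of characters (one prompt every 20 s)."""
    return ", ".join(rng.sample(CHARACTERS, 3))

def stage2_prompt(rng):
    """Stage 2: a sentence randomly composed from book fragments."""
    return f"{rng.choice(CHARACTERS)} {rng.choice(FRAGMENTS)}"

def stage3_prompt(sentence):
    """Stage 3: invert the letters of a stage-2 sentence (here,
    reversing each word), rendering it indecipherable to the model."""
    return " ".join(word[::-1] for word in sentence.split())
```

For example, `stage3_prompt("Alice fell down")` yields `"ecilA llef nwod"`, the kind of illogical text that forces the image model back onto its visual inputs and trained model.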
The prompts are generated in English from Lewis Carroll's original texts and instantly translated by AI into Portuguese (see the link below for technical details: DEEP ALICE: System Design and Implementation, deep-alice.blogspot.com). It is worth noting that both Carroll's texts and John Tenniel's illustrations have been in the public domain since 2003.


