A media artist and creative coder, also known as Paul Henri, currently based in Karlsruhe, Germany. After completing his master's degree in computer science, he joined the ZKM | Center for Art and Media Karlsruhe. Since then he has been supporting the intelligent.museum team, among other things with his expertise in artificial intelligence and in developing exhibits. As an artist, he explores intermedial translation and multimodal interactions between humans and machines.
Contact: bethgeph at gmail dot com
Github: bytosaur
An artist persona of Paul Heinrich Bethge that recently became self-aware and split off from his creator. As work is not a concept that Henri understands, his works are "funs". These funs center around parties and the exploration of the human condition. He is best known for his saying: "French it up!". Lately he has become enthusiastic about live coding as a performative act.
Contact: bethgeph at gmail dot com
Instagram: le_paul_henri
Latent Fields is a series of latent space explorations on a two-dimensional plane. Each axis of the plane projects a trajectory through one of the two latent spaces of Stable Diffusion: the text encodings and the image latents. Sampling along the axes leads to a grid of structurally and contextually related images, as sketched below.
For the invitation cards of their biennial award ceremony, the Berthold-Leibinger-Stiftung requested an AI-generated artwork. As the award is given to innovations in the field of laser applications, a latent field was designed around the duality of photons. The two-dimensional field is a grid of 28 x 36 related images. From left to right, images of photons as particles morph into images of photons as waves, resolving the paradox in fictive illustrations. Each guest was given an individual image. Upon entering the event, their card could be scanned to see the previous guest's invitation card morph into their own.
Credits: Yasha Jain (Installation), Bernd Lintermann (Project Mentor)
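The original code behind these fields is not reproduced here; the following is a minimal Python sketch of the sampling idea, assuming the Hugging Face diffusers Stable Diffusion pipeline. The prompts, seeds and the small 4 x 4 grid are placeholders: one axis interpolates between two text encodings, the other between two initial image latents.

```python
# Minimal sketch (not the original Latent Fields code): sample a grid by interpolating
# along two trajectories, one through Stable Diffusion's text-embedding space (x axis)
# and one through its image-latent (noise) space (y axis).
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

def encode(prompt):
    """Encode a prompt into CLIP text embeddings."""
    tokens = pipe.tokenizer(prompt, padding="max_length",
                            max_length=pipe.tokenizer.model_max_length,
                            truncation=True, return_tensors="pt")
    return pipe.text_encoder(tokens.input_ids.to(device))[0]

def slerp(a, b, t):
    """Spherical interpolation, commonly used for Gaussian noise latents."""
    a_n, b_n = a / a.norm(), b / b.norm()
    omega = torch.acos((a_n * b_n).sum().clamp(-1, 1))
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

# Endpoints of the two trajectories (hypothetical prompts and seeds).
emb_a, emb_b = encode("a particle of light"), encode("a wave of light")
shape = (1, pipe.unet.config.in_channels, 64, 64)
noise_a = torch.randn(shape, generator=torch.Generator(device=device).manual_seed(0), device=device)
noise_b = torch.randn(shape, generator=torch.Generator(device=device).manual_seed(1), device=device)

rows, cols = 4, 4  # the actual fields are much larger, e.g. 28 x 36
for i in range(rows):
    for j in range(cols):
        emb = torch.lerp(emb_a, emb_b, j / (cols - 1))  # x axis: text encodings
        lat = slerp(noise_a, noise_b, i / (rows - 1))   # y axis: image latents
        image = pipe(prompt_embeds=emb, latents=lat,
                     num_inference_steps=25, guidance_scale=7.5).images[0]
        image.save(f"field_{i:02d}_{j:02d}.png")
```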
Vereint (engl. combined) is an unofficial music video for the song Entzweit (engl. divided) by Hotel Morphila Orchester, a band that Peter Weibel, the former director of the ZKM, founded in the 1970s. In the song, Peter Weibel sings about how oneself is a mere collection of functional pairs that may be divided, forming two separate entities. In contrast to that, the video was co-produced by the combination of a human and an artificial intelligence. The content of the images relates to the lyrics, while the movement in the video depends on the drums of the audio. The production is fully automated and requires no post-processing. The video premiered at ZKM's internal memorial ceremony for Peter Weibel.
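The production pipeline itself is not published here; as a rough, hypothetical sketch of the drum-driven movement, an onset-strength envelope extracted from the song can set how far each video frame steps through a generative model's latent space. librosa is assumed for the audio analysis; the file name and generate_frame are placeholders for whatever model produced the actual frames.

```python
# Sketch of the idea: drums drive the step size of a latent-space walk, one step per frame.
import librosa
import numpy as np

FPS = 25
audio, sr = librosa.load("entzweit.wav", sr=None, mono=True)  # hypothetical file name

# Onset strength per analysis frame, resampled to one value per video frame.
hop = 512
onset_env = librosa.onset.onset_strength(y=audio, sr=sr, hop_length=hop)
n_frames = int(len(audio) / sr * FPS)
frame_times = np.arange(n_frames) / FPS
env_times = librosa.frames_to_time(np.arange(len(onset_env)), sr=sr, hop_length=hop)
drive = np.interp(frame_times, env_times, onset_env)
drive /= drive.max() + 1e-9  # normalise to 0..1

# Random walk through latent space whose step size follows the drums.
rng = np.random.default_rng(0)
latent = rng.standard_normal(512)
direction = rng.standard_normal(512)

for i, strength in enumerate(drive):
    latent = latent + 0.15 * strength * direction  # bigger steps on drum hits
    # generate_frame(latent, lyrics_at(i / FPS))   # placeholder for the image model
```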
COCOLOCO is an attempt at turning everyday objects into musical instruments. Each object class is associated with an instrument, which can be controlled by moving the object in space. The work consists of two applications: a Python application detects certain objects in the camera image and sends class and position data over OSC, and a Pure Data patch forwards the position data to various modifiers of the instrument assigned to that class. A minimal sketch of the detection side follows below.
Credits: Yasha Jain (Embedded YOLO acceleration)
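Not the original code, but a minimal sketch of the Python side: a YOLO model (the credits mention embedded YOLO acceleration) detects objects in the camera image, and each object's class and normalised position are sent over OSC. The Ultralytics YOLO package and python-osc are assumed; the OSC address and port are made up.

```python
# Detect objects in the webcam image and forward class + position to Pure Data via OSC.
import cv2
from pythonosc.udp_client import SimpleUDPClient
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                # any COCO-trained detector works here
osc = SimpleUDPClient("127.0.0.1", 9000)  # Pd can receive this via [netreceive -u -b] + [oscparse]
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        cls = int(box.cls[0])
        x, y, w, h = box.xywhn[0].tolist()  # centre and size, normalised to 0..1
        # one message per detected object: class name plus position in the camera frame
        osc.send_message("/cocoloco/object", [model.names[cls], x, y])
```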
LiveGAN is an audio-reactive application in which one's auditory impulses are converted into faces in real time. For this work a dataset called FLICK_KA was used, consisting of images of over 50k photo booth participants at ZKM. The dataset was used to train a DCGAN, an architecture chosen to meet the real-time requirements. Experiments with 256x256 @ 60 Hz and 512x512 @ 30 Hz proved achievable using mid-level GPU acceleration. However, I failed to maintain image quality while upscaling. The work was shown several times as an example of using ofxTensorFlow2 to combine creative coding and machine learning.
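The installation itself runs in openFrameworks via ofxTensorFlow2; purely as an illustration of the audio-reactive idea, here is a hedged Python sketch in which the loudness of the incoming audio nudges the DCGAN latent vector towards changing random targets. The generator path, latent size and scaling factors are assumptions.

```python
# Audio level drives how fast the latent vector moves, so louder input means faster face changes.
import time

import numpy as np
import sounddevice as sd
import tensorflow as tf

LATENT_DIM = 128                                              # assumed latent size of the DCGAN
generator = tf.keras.models.load_model("livegan_generator")   # hypothetical exported generator

level = 0.0

def measure_level(indata, frames, time_info, status):
    """Store the RMS level of the current audio block."""
    global level
    level = float(np.sqrt(np.mean(indata ** 2)))

z = np.random.randn(1, LATENT_DIM).astype(np.float32)
target = np.random.randn(1, LATENT_DIM).astype(np.float32)

with sd.InputStream(channels=1, samplerate=44100, callback=measure_level):
    while True:
        # louder input -> bigger step towards the current target in latent space
        z += np.clip(level * 5.0, 0.0, 1.0) * 0.1 * (target - z)
        if np.linalg.norm(target - z) < 0.5:                  # pick a new target when reached
            target = np.random.randn(1, LATENT_DIM).astype(np.float32)
        face = generator(z, training=False)                   # e.g. a 256x256x3 image tensor
        # display `face` with the drawing framework of your choice
        time.sleep(1 / 30)                                    # crude 30 fps pacing
```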
ofxTensorFlow2 is an openFrameworks addon for loading and running machine learning models trained with the TensorFlow2 library. The project started with the intent to run TensorFlow models in openFrameworks, as Memo Akten did for version 1 with his ofxMSATensorFlow, and was quickly adopted and extended by the openFrameworks community. The addon features common interfaces to the models and comes with a broad spectrum of examples and guides. The goal is to ease the execution of machine learning models, which lowers the entry barrier and leaves more time for the creative part. As a developer or artist, the model is treated as a black box, e.g. image in -> image out; a minimal export sketch follows below.
Credits: Dan Wilcox (openFrameworks guru)
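On the Python side, the typical workflow is exporting a trained TensorFlow 2 model as a SavedModel folder, which the openFrameworks app then loads through the addon and treats as a black box. A toy sketch, with an arbitrary image-to-image network standing in for a real model and "model" as a placeholder folder name:

```python
import tensorflow as tf

# Toy image-to-image network: 3-channel image in, 3-channel image out.
inputs = tf.keras.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
outputs = tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

# ...training would happen here...

# Export as a SavedModel folder; the openFrameworks project then loads this folder
# (typically copied into the app's bin/data) without needing to know the internals.
tf.saved_model.save(model, "model")
```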
Spoken Language Identification is a repository tackling the task of identifying the language spoken in a 5-second audio sample. The study uses a rather small model (7 MB) to distinguish Noise, Chinese (mainland), English, French, German, Italian, Spanish and Russian from each other, with an accuracy of 85% on the Common Voice v5 dataset. At the time of development there was hardly any usable open-source model; today this task is solved by OpenAI's Whisper.
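As a hedged sketch of how such a classifier is used at inference time (the repository's actual API, preprocessing and label order may differ): five seconds of audio go in, a probability per class comes out. The model path, the 16 kHz sample rate and the assumption that the model takes raw audio and returns logits are placeholders.

```python
import numpy as np
import tensorflow as tf

LABELS = ["Noise", "Chinese", "English", "French", "German", "Italian", "Spanish", "Russian"]
SAMPLE_RATE = 16000                                       # assumed input rate
model = tf.keras.models.load_model("lang_id_model")       # hypothetical export of the ~7 MB model

def identify(audio_5s: np.ndarray) -> str:
    """Classify a mono float32 buffer of exactly 5 seconds."""
    assert audio_5s.shape == (5 * SAMPLE_RATE,)
    logits = model(audio_5s[np.newaxis, :].astype(np.float32))
    probs = tf.nn.softmax(logits, axis=-1).numpy()[0]     # assuming the model outputs logits
    return LABELS[int(np.argmax(probs))]

# usage: identify(np.zeros(5 * SAMPLE_RATE, dtype=np.float32))
```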
Live coding is a form of programming where the process of coding takes place right in front of the audience. Using algorithms, the artist creates both musical and visual forms and patterns while fusing art and technology.
Performances: