AI research project
In 2024 I received funding from Arts Council England through their Develop Your Creative Practice scheme to research AI image, animation and video creation techniques. The videos on this page are pieces of artist film, music videos and sketch ideas I have developed during the research project.
Throughout the project I was keen to use open-source software where possible, so the work here has all been developed using Stable Diffusion, mainly through ComfyUI. Alongside ComfyUI I have also used a number of video and audio tools such as Hailuo, ElevenLabs and Krea.
Authentic Rambling Man
The focus of this piece was to look at current video generation tools and explore their ability to mimic the aesthetics of archival film footage. The piece uses ComfyUI to create still images via text-to-image; those images are then run through the image-to-video generator in Hailuo AI. The audio was created with AI-generated music.
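For anyone curious how the ComfyUI half of that pipeline can be automated, the sketch below queues a workflow (exported from ComfyUI in its "API format") against a locally running ComfyUI server and patches in a prompt before generating stills. The server address, workflow filename, node id and prompt text are illustrative assumptions, not the exact workflow used for the piece.

```python
# Minimal sketch, assuming a local ComfyUI server and a workflow exported in
# "API format". Queues the workflow so the resulting stills can later be fed
# to a separate image-to-video step (e.g. Hailuo, done manually here).
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

def queue_workflow(workflow_path: str, prompt_text: str) -> str:
    """Load an API-format workflow JSON, patch in a prompt, and queue it."""
    with open(workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Assumed node id "6" is the CLIP Text Encode node holding the positive prompt.
    workflow["6"]["inputs"]["text"] = prompt_text

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]

if __name__ == "__main__":
    prompt_id = queue_workflow(
        "archival_film_stills.json",  # hypothetical exported workflow file
        "grainy 16mm archival footage of a rambling man, scratched film stock",
    )
    print("queued:", prompt_id)
```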
Stable Visions
Screened at Norwich University of the Arts, this installation was created in ComfyUI with elements of CG. The AI elements used AnimateDiff and a series of LoRAs from CivitAI to create effects such as old film and seamless panoramic images. The installation was
Tu Casa es Mi Casa
I was interested to see whether it was possible to create consistent characters using a combination of ComfyUI and Hailuo. To do this I trained a series of LoRAs in FluxGym (running in Pinokio), which were then used in a Flux workflow in ComfyUI. The resulting stills were then run through Hailuo.
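As a rough illustration of the same idea outside ComfyUI, the sketch below loads a Flux base model and a character LoRA (such as one trained in FluxGym) using the Hugging Face diffusers library and renders a still. The model id, LoRA filename, trigger word, prompt and settings are assumptions for illustration rather than the workflow used for this piece.

```python
# Minimal sketch, assuming a Flux base model plus a character LoRA exported
# from a FluxGym training run. Renders a still frame that could then be fed
# into an image-to-video tool.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Hypothetical LoRA file from a FluxGym training run.
pipe.load_lora_weights("loras/my_character_flux_lora.safetensors")

image = pipe(
    # "my_character" stands in for whatever trigger word the LoRA was trained with.
    prompt="my_character sitting at a kitchen table, soft morning light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("character_still.png")  # still to pass on to image-to-video
```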
Physically
Throughout the project I became increasingly interested in setting images to audio in a style reminiscent of slide shows. This piece uses a non-AI recording and additional sound design set to images in the style of macro photography, created using ComfyUI. My intention is to further develop this piece using non-AI methods.
Ascending
The music video combines AI-generated content with CG. Although predominantly CG, around 25% of the content is AI. The dream sequence is perfectly suited to AI content, which sits on a transparent screen within the CG environment. AI is also used towards the end of the video for special-effects shots that would have taken days to produce using traditional methods. The project used AnimateDiff in ComfyUI and text-to-video in Hailuo AI.
The Many Voices of Liam
Receiving the funding has been an incredible opportunity to develop AI techniques; however, I felt somewhat daunted as to where to start. There was such a vast area to explore and I really didn’t want to squander the opportunity. So to begin I made something very quickly, in one day, using ElevenLabs voice-to-voice alongside text-to-image and AnimateDiff in ComfyUI. The film is quite ridiculous, but it did work in getting the project off the ground.
Void
Prior to the Arts Council research project I’d begun to explore AI techniques through a few films and music videos. I’ve included two of these pieces here, Void and Sticky Keys.
Void is an exploration of thresholds and the notion of the AI singularity. It is showing at Waterside Arts from 25 January to 19 April 2024. The work was created using AnimateDiff in ComfyUI and CG techniques.
Sticky Keys
A music video created using Deforum in Automatic1111.