Creating Styleframes with LeonardoAI and InvokeAI
I decided to take screenshots of one of my favourite animated movies, The Road to El Dorado, and create some live-action styleframes out of them.
Process:
To get an initial image out, I'm using Leonardo AI. Its workflow helps me quickly establish the style and composition I'm looking for. In this case, I have a screenshot to match to:
Quick definitions before continuing:
Control Nets
These are AI tools that guide image generation by providing additional structural input, such as sketches, depth maps, or edge detection. They help maintain consistency in composition, pose, or perspective while still allowing for creative freedom in generating an output image.
Since I want to match my image as closely as possible to the screenshot, I'm using a combination of ControlNets with the screenshot as my input image.
The Edge to Image control ensures my generated image follows the screenshot's structure.
The Depth to Image control supplies the depth map, preserving the scene's spatial layout.
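For intuition on the "edge" input: an edge ControlNet is typically fed a line drawing extracted from the reference image (often with a Canny detector). The pure-Python toy below is a stand-in for that preprocessing step, not the real detector; the function name and threshold are my own.

```python
def edge_map(gray, threshold=0.5):
    """Toy edge detector: mark pixels whose horizontal or vertical
    intensity gradient exceeds a threshold. (Illustrative stand-in
    for the Canny-style preprocessing an edge ControlNet uses.)"""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # forward differences, clamped at the image border
            gx = gray[y][min(x + 1, w - 1)] - gray[y][x]
            gy = gray[min(y + 1, h - 1)][x] - gray[y][x]
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A dark-to-bright boundary produces a line of edge pixels,
# which is the "structure" the control then holds the output to.
outline = edge_map([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [0, 0, 1, 1]])
```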
Image Prompt Adapter
IP Adapters extract specific visual elements from reference images and apply them to new AI-generated artwork. It's an add-on that allows you to use images as prompts.
The Style to Image control applies the colours and textures from my input image.
Each control has its own strength, and finding the right value for each so that they all work together is key to obtaining the look you're after.
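A rough mental model for how those strengths interact: multi-control pipelines typically scale each control's conditioning signal by its strength and sum the results, so raising one slider shifts the balance rather than switching controls on or off. The sketch below is a toy illustration of that weighting (hypothetical helper; real pipelines do this on large tensors, not short lists).

```python
def combine_control_signals(signals, strengths):
    """Blend several controls' conditioning signals: scale each
    signal by its strength, then sum elementwise. (Toy version of
    how multi-ControlNet setups typically weight their inputs.)"""
    combined = [0.0] * len(signals[0])
    for signal, strength in zip(signals, strengths):
        for i, value in enumerate(signal):
            combined[i] += strength * value
    return combined

# e.g. edge control at 0.75, depth at 0.5, style at 0.25:
blended = combine_control_signals(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    [0.75, 0.5, 0.25],
)
```

Because the signals add up, a very strong style control can drown out a weak edge control, which is why tuning all three strengths together matters.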
My output image:
I'm taking this image into InvokeAI, where I can mask the specific region I want to work on.
Note: The denoising strength in InvokeAI controls how far your output image deviates from your input image.
Lower value = closer to the input image; good for subtle changes.
Higher value = closer to your prompts/inputs; more noise is added, giving more creative freedom to change your image.
My final output image:
Here are some more screenshots I turned into styleframes using the Leonardo AI and InvokeAI workflow.
Chel character: