Posts

Contextual image editing models

Lately, I’ve been integrating Nano Banana and Flux Kontext more into my matte painting workflow. These image editing models work well for a range of tasks, from extending environments seamlessly to changing the lighting and time of day. I wanted to show a quick workflow for establishing a solid base for a matte painting. As my plate, I'll use one of my photos from my library: a temple nestled in a forest canopy. I want to extend the temple downward and have it sit on the edge of a cliff in the forest. Using Gemini's Nano Banana, I drafted a starting prompt. For a first pass, I'm happy with the temple extension, but I want the rocks to blend in with the forest more. This conversational approach makes it possible to split a task into steps if you want to focus on one step at a time. Rocks blended in with the foliage more: Next, prompting for some clouds: Using Guide Lines As a next step, I want to extend the temple further, but specifically around the clif...
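The step-by-step conversational approach can be sketched as a loop over prompts, where each edit builds on the previous result. The `edit_image` helper below is a hypothetical stand-in for whichever editing model or UI you use (it just records the prompt chain), not the actual Gemini API:

```python
# Sketch of splitting one matte-painting edit into sequential prompts.
# edit_image() is a hypothetical placeholder, not a real API call:
# it appends the prompt to a history and returns a new image label.

def edit_image(image, prompt, history):
    """Pretend edit: record the prompt and return a derived 'image'."""
    history.append(prompt)
    return f"{image} + [{prompt}]"

# One big edit broken into focused steps, applied in order.
steps = [
    "Extend the temple downward so it sits on the edge of a cliff",
    "Blend the cliff rocks into the surrounding forest foliage",
    "Add low clouds drifting through the canopy",
]

image = "temple_plate"
history = []
for prompt in steps:
    image = edit_image(image, prompt, history)

print(len(history))  # prints 3: each step was applied once, in order
```

The point of the structure is that each iteration sees the output of the last one, so you can judge a single change (rocks, then clouds) before moving on.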

3D render refiner

I have been running 3D renders through my GenAI setups and turning them into more photoreal images, while exploring how SDXL can refine lighting, textures, and mood. I start with my basic render: Here I'm using LeonardoAI and decided to turn the render into a street in autumn. I'm using a content reference to match my output image to the render, and a style reference of a New York street in autumn (generated text-to-image for photorealistic quality). The final image: I wanted to try different styles and change the lighting, texture detail, and atmosphere. Here's a version of the street render at dawn: And another version of the street in a European style, on a sunny day, with strong lighting and shadows: Final image: I also wanted to try generating a nighttime image with a London feel to it. I kept the style reference low, as I wanted to steer more towards the prompt and let it interpret 'London' more. A quiet London street at night: I wanted to turn a different rend...
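One way to think about the content-reference strength is as a dial between "stay faithful to the render" and "follow the prompt/style freely". The numpy sketch below models that as a simple linear blend; this is only a conceptual toy, not how LeonardoAI or diffusion guidance actually works internally:

```python
import numpy as np

def apply_content_reference(render, generated, strength):
    """Toy model of a content-reference dial.

    strength=1.0 reproduces the render exactly; strength=0.0 ignores
    it entirely. A conceptual linear blend only, not real diffusion
    conditioning.
    """
    return strength * render + (1.0 - strength) * generated

render = np.full((4, 4, 3), 0.2)     # stand-in for the 3D render
generated = np.full((4, 4, 3), 0.8)  # stand-in for the styled output

faithful = apply_content_reference(render, generated, 0.9)  # hugs the render
loose = apply_content_reference(render, generated, 0.2)     # hugs the prompt

print(faithful.mean(), loose.mean())
```

Keeping the style reference low, as in the London example, corresponds to the low-strength end of this dial: the output drifts further from the reference and leans on the prompt instead.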

ComfyUI IC-Light relighting exploration

I've started using ComfyUI to gain more control over my workflow. Its node-based system is similar to InvokeAI's. I'm exploring IC-Light (image relighting), and it's a great tool in visual development for composition, lighting, mood, and style. Since it runs on an SD1.5 model, it's not quite ready for a final DMP output yet, but I see lots of potential for visual development. For example, it quickly gives you a location under different lighting scenarios. I was also able to control exactly the direction, colour, and intensity of the lighting. Using my own photos, I can quickly create multiple different moods for art direction. For my workflow, I mixed different colour gradients with my image, controlling the light direction and tone of the scene. Original photo: My IC-Light generations: Original photo: My IC-Light generations: Example of light direction control with a spline: Adding some colour to the gradient: Output example: Original photo: My IC...
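The gradient-mixing step can be sketched with numpy: build a directional luminance ramp, tint it with a colour to set the tone, and multiply it into the photo so one side of the frame reads as lit. This is a simplified stand-in for the kind of gradient conditioning fed to IC-Light, not the model itself:

```python
import numpy as np

def directional_gradient(h, w, direction="left"):
    """Luminance ramp from bright (1.0) to dark (0.2) across the frame."""
    ramp = np.linspace(1.0, 0.2, w)
    if direction == "right":
        ramp = ramp[::-1]
    return np.tile(ramp, (h, 1))

def tint_gradient(gradient, color):
    """Mix an RGB colour into the gradient to set the light's tone."""
    return gradient[..., None] * np.asarray(color, dtype=float)

h, w = 6, 8
grad = directional_gradient(h, w, "left")
warm = tint_gradient(grad, (1.0, 0.85, 0.6))  # warm key light from the left

photo = np.full((h, w, 3), 0.5)  # stand-in for the source photo
relit_hint = photo * warm        # brighter, warmer on the lit side

print(relit_hint[:, 0].mean() > relit_hint[:, -1].mean())  # prints True
```

Swapping the ramp direction, colour, or falloff gives different light directions and moods from the same photo, which is essentially what the spline and colour controls do in the node graph.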