Posts

3D render refiner

I have been running 3D renders through my GenAI setups and turning them into more photoreal images, while exploring how SDXL can refine lighting, textures, and mood.

I start with my basic render. Here I'm using Leonardo AI and decided to turn the render into a street in autumn. I am using a content reference to match my output image to the render, and a style reference of a New York street in autumn (generated text-to-image for photorealistic quality).

The final image:

I wanted to try different styles and change the lighting, texture detail, and atmosphere. Here's a version of the street render at dawn:

And another version of the street in a European style, on a sunny day, with strong lighting and shadows:

Final image:

I also wanted to try generating a night-time image with a London feel to it. I kept the style reference weight low, as I wanted to steer more towards the prompt and let it interpret 'London' more freely. A quiet London street at night:

I wanted to turn a different rend...

ComfyUI IC-Light relighting exploration

I've started using ComfyUI to gain more control over my workflow. Its node-based system is similar to InvokeAI. I'm exploring IC-Light (image relighting), and it's a great tool in visual development for composition, lighting, mood, and style. Since it runs on an SD1.5 model, it's not quite ready for a final DMP output yet, but I see lots of potential for visual development. For example, it is quick at giving you a location with different lighting scenarios. I was also able to precisely control the direction, colour, and intensity of the lighting.

Using my own photos, I'm able to quickly create multiple different moods for art direction. For my workflow, I mixed different colour gradients with my image, controlling the light direction and tone of the scene.

Original photo:

My IC-Light generations:

Original photo:

My IC-Light generations:

Example of light direction control with a spline:

Adding some colour to the gradient:

Output example:

Original photo:

My IC...
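The gradient-mixing idea above can be sketched outside ComfyUI as well. The snippet below is a minimal illustration, not IC-Light's actual conditioning pipeline: it builds a tinted horizontal ramp as a stand-in for a directional light source, then screen-blends it over a photo so one side of the frame lifts in brightness. The function names, the `direction` and `strength` parameters, and the warm default `color` are all my own illustrative choices.

```python
import numpy as np

def light_gradient(h, w, direction="left", color=(1.0, 0.85, 0.6)):
    # Ramp that brightens toward one side of the frame, standing in
    # for a directional light. Tinted by `color` (warm by default).
    ramp = np.linspace(1.0, 0.0, w, dtype=np.float32)  # bright left, dark right
    if direction == "right":
        ramp = ramp[::-1]
    # Tile to full height, then broadcast the RGB tint over the ramp.
    grad = np.tile(ramp, (h, 1))[..., None] * np.array(color, dtype=np.float32)
    return grad  # shape (h, w, 3), values in [0, 1]

def screen_blend(image, grad, strength=0.6):
    # Screen blend: lightens where the gradient is bright,
    # leaves the dark side of the frame untouched.
    image = image.astype(np.float32) / 255.0
    lit = 1.0 - (1.0 - image) * (1.0 - grad * strength)
    return (np.clip(lit, 0.0, 1.0) * 255.0).astype(np.uint8)

# Usage: "relight" a flat mid-grey test image from the left.
photo = np.full((64, 128, 3), 128, dtype=np.uint8)
relit = screen_blend(photo, light_gradient(64, 128, direction="left"))
```

Swapping the ramp for a radial falloff, or keying the gradient to a spline as in the example above, gives the same kind of directional control over mood before committing to a full relight pass.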