Artificial intelligence is transforming the creative industry, offering new ways to generate, iterate, and visualise concepts at unprecedented speed. However, the use of AI raises important questions around authorship, accuracy, process integrity, environmental impact, and artistic identity.
This framework provides guidelines for integrating AI into research-driven visualisation projects without compromising creative direction. Rather than replacing craftsmanship, AI becomes a collaborator, supporting an iterative workflow and enabling a new way of producing images, while navigating between the artist's creative vision and authorship on one side, and the speed and iterative quality AI can bring to the table on the other.
1. Define Purpose
Before integrating AI into a visual, it is essential to clarify why AI is being used and what part it will play in the creative process. The introduction of AI does not automatically improve the quality of the image. It can, however, accelerate visual ideation and help the artist iterate or explore more diverse concepts during the creative process. At the same time, it raises important questions related to authorship, intellectual transparency, cultural responsibility, and the reliability of historically inspired content. For this reason, it is essential to identify what the visual needs to communicate. Is the main purpose to inform, move, convince, or convey a certain message? Depending on this message, certain uses of AI might conflict with or impede it. For example, creating a visual for an environmental advertisement with the help of AI could undermine the message because of the ecological footprint of AI computation.
To address this, the role of AI within the project should be defined early on. The question should not merely be "How can AI help?" but rather "How will AI be used in this use case?"
- As a sketching tool?
- As a reference generator?
- As a production assistant?
- As a post-processing tool?
- As a part of the final artefact itself?
- As an animation tool?
- As an image/video upscaler?
Furthermore, it is important to establish boundaries for the use of AI during the project. Defining these prevents confusion later in the workflow and ensures the use of AI is aimed at reaching better results or optimising the workflow, while the human creative process remains central. Treat AI output as a starting point, not a final result. All AI-generated content that is visible in the final result should be verified against reliable academic sources.
Document the use of AI within the project in a journal or with annotations, as well as how this decision might influence or even remove (creative) authorship. Mentioning this in the crediting of the work ensures clarity, archival value, and ethical transparency.
_____________________________________
Context and Accuracy
AI models interpret rather than understand. Their output is based on patterns, not reasoning or expertise. Early experiments within the project can demonstrate both potential and risk: stunning visuals can be created quickly, yet they often introduce inaccuracies such as anachronistic materials, distorted figures, and perspective errors. Additionally, guaranteeing continuity across multiple visuals can often be tricky. This further underlines the value of an early experimentation phase: it can surface future difficulties before they arise and familiarise the artist with the particularities of the specific AI tool. During this testing phase, keep in mind that every AI-generated element should be editable and replaceable, as this supports a non-destructive workflow throughout the project and allows for an iterative creation process.
2. Build a Hybrid Workflow
Once the project is defined, as well as the part AI will play in it, the second step is to build a workflow matching the previously defined parameters. Some guidelines are outlined below:
Lock key variables early:
- Camera angles, horizon line, scale, orientation, light direction, asset proportions…
Use AI selectively for:
- Mesh variation, texture exploration, background models…
Maintain a versioning system:
- Track decisions, inputs, prompt variations, and technical evolution across iterations.
This prevents visual drift across the project and enables precise comparison between the different versions of the visuals.
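The versioning system above can be sketched as a simple append-only journal. This is a minimal illustration, not a prescribed format: the field names (`tool`, `prompt`, `decision`, `output_file`) and the JSON-lines layout are assumptions chosen for this example.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_iteration(journal: Path, tool: str, prompt: str,
                  decision: str, output_file: str) -> dict:
    """Append one AI-iteration record to a JSON-lines journal file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                # e.g. "KREA" or "Firefly" (hypothetical values)
        "prompt": prompt,            # exact prompt or input-image reference
        "decision": decision,        # why this variation was kept or rejected
        "output_file": output_file,  # render or asset this iteration produced
    }
    with journal.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because each line is one self-contained record, iterations can be diffed, filtered per tool, and cited later when crediting AI involvement in the final work.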
_________________________________________
AI in Pre-production
During pre-production AI can help artists quickly generate concept sketches, mood boards, and environment ideas, allowing teams to explore more visual options early on. AI tools can also assist with drafting story elements, character backgrounds, and world-building notes, supporting the creative process. However, pre-production should still be driven primarily by human creativity and decision-making, with AI serving as a supportive tool rather than a replacement. By combining human imagination with AI-assisted exploration, teams can ideate quickly on multiple concepts without creative drain.
_________________________________________
AI in Production
In production, AI can be useful for the fast generation of background and midground models. Many applications can already generate high-fidelity models from prompts or pictures, and even complex objects can be generated with decent results. For realism, however, picture-to-model generation is advised, as it tends to produce better results. Additionally, most of these applications can already reduce the polycount to a workable amount. Drawbacks of this approach are the need for paid subscriptions and models that remain quite 'heavy' when a certain level of detail is maintained.
_________________________________________
AI in post-production
In post-production, major changes are usually avoided; however, in certain research-based projects they can become unavoidable due to late-stage, project-related discoveries. Fully re-rendering the scenes can take too much time because of high render times. Instead, an easy way to implement big changes in post is with AI-based post-production tools. These allow artists to update major scene elements or small details without repeating the entire rendering process. With AI tools integrated into classic art programs, such as Photoshop (Firefly AI), these changes are quite feasible.
_________________________________________
Examples of these late-stage AI edits:
- Correcting vegetation type or placement
- Adjusting lighting nuance or atmosphere
- Updating texture detail, material fidelity or scale
- Removing placeholders or historically inaccurate elements
Furthermore, AI can serve as a good upscaling tool to improve low-quality renders. This enables the artist to render at a lower resolution and upscale the result in post. Especially with large landscapes containing many 'heavy' elements such as vegetation, this can be a useful step to save computing time when only one PC is available. For this, it is advised to divide the render into smaller pieces and run these through an upscaling tool (e.g. KREA AI). This prevents unintended changes, improves consistency, and allows for larger upscaling resolutions. Once the pieces are outputted, they can simply be stitched back together in Photoshop and unwanted elements removed.
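The divide-and-stitch step can be sketched as below, assuming images are handled as NumPy arrays; the actual upscaling call (to a tool such as KREA) would happen between splitting and stitching, and averaging the overlapping regions is one simple way to hide seams. Tile size and overlap values are illustrative.

```python
import numpy as np

def split_tiles(img: np.ndarray, tile: int, overlap: int):
    """Split an H x W x C image into overlapping tiles as (y, x, array) triples."""
    step = tile - overlap
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            tiles.append((y, x, img[y:y + tile, x:x + tile].copy()))
    return tiles

def stitch_tiles(tiles, shape):
    """Reassemble tiles into one image; overlapping regions are averaged."""
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape[:2], dtype=np.float64)
    for y, x, t in tiles:
        th, tw = t.shape[:2]
        out[y:y + th, x:x + tw] += t
        weight[y:y + th, x:x + tw] += 1.0
    return out / weight[..., None]  # per-pixel average over all covering tiles
```

With identical tiles, stitching reproduces the original exactly; after upscaling, the same logic applies with the tile coordinates scaled by the upscale factor.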
3. Evolving static visuals to animated clips
To make visuals more appealing and immersive, the artist can retroactively convert a static image into an animated clip with the help of AI. It is also possible to do this manually with motion graphics, but that is quite labour-intensive. Animating the 'classical way' is an option as well; however, that needs to be decided at the beginning of the project, whereas this use case looks at animating after the fact. Such a need can be prompted by late-stage changes in the research or project, so having a strategy ready for these cases can save a lot of time.
To animate a static visual, a multitude of tools are available, such as AI-assisted motion tools like KREA or EbSynth-type workflows. After deciding which tool to use, it is important to familiarise yourself with the program in order to work with its limitations, as these can heavily influence the process. As before, it is easier to cut up the visual before animating, as this allows for better AI generations and higher resolutions. To avoid visible seams, cutting the segments with overlap helps blending in the later phase. Movements that are relatively easy to produce with AI-assisted motion tools are subtle ones such as tidal shifts, smoke, or wind pushing vegetation. Masking out elements in the visuals before inputting them into the AI tool is not necessary and can, in some cases, even make the integration of the resulting footage more difficult.
Another challenge that can come up is colour shift in the generated footage. This often happens across different models in applications such as KREA AI: after a few seconds of footage, the colours drift towards different shades or deeper shadows. This makes it impossible to seamlessly loop the video without proper colour grading.
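One simple way to counter such drift in grading is to match each frame's per-channel mean and standard deviation back to a reference frame (typically the first). This is a generic global colour-transfer sketch, not a feature of any specific tool, and it assumes frames are 8-bit arrays:

```python
import numpy as np

def match_colour(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each channel of `frame` so its mean/std match `reference`."""
    out = frame.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(out.shape[-1]):
        f_mean, f_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        if f_std > 1e-8:
            # rescale contrast, then re-centre on the reference mean
            out[..., c] = (out[..., c] - f_mean) * (r_std / f_std) + r_mean
        else:
            out[..., c] += r_mean - f_mean
    return np.clip(out, 0, 255)
```

Applying this to every frame against the first keeps the clip's global palette stable, which makes seamless looping feasible; localised drift would still need manual grading.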
Other motions, such as cloud movement or weather changes, are harder to handle with AI, because generating them affects the whole frame. This makes it impossible to later make tweaks within parts of the scene, and blending this footage between the different 'puzzle pieces' is very challenging. For these reasons, it is easier to composite these weather changes last, and to do so manually.
_________________________________________
General guideline:
- Apply AI motion to segmented layers rather than full renders
- Use one model or method per motion category:
  - Vegetation, atmosphere, water systems, human and animal motion
- Reassemble layers using compositing
- Correct flicker, colour shifting, and visual artifacts
- Avoid visible seams, lighting discontinuities, or style mismatches between layers
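The reassembly step above can be sketched as back-to-front alpha compositing of the motion layers over the base render. The RGBA, 8-bit layer format is an assumption for this example; a compositing package would normally handle this, but the underlying maths is just:

```python
import numpy as np

def composite_layers(base: np.ndarray, layers) -> np.ndarray:
    """Stack RGBA motion layers (back to front) over an RGB base frame."""
    out = base.astype(np.float64)
    for layer in layers:
        rgb = layer[..., :3].astype(np.float64)
        alpha = layer[..., 3:4].astype(np.float64) / 255.0  # per-pixel opacity
        out = rgb * alpha + out * (1.0 - alpha)             # standard "over" blend
    return out.astype(np.uint8)
```

Keeping each motion category on its own layer like this is what makes late corrections cheap: a flickering vegetation layer can be regenerated and re-composited without touching water or atmosphere.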
4. Reflection: Ethics, Ownership & Creative Responsibility
AI introduces new forms of artistic authorship and responsibility, both ethically and culturally. In this framework it is used primarily to speed up the workflow, but it can also make real visual contributions to the final visualisation. When using AI, it is important to define and reflect on its role within the project and whether it is necessary to disclose it. When AI is used in pre-production only, or as a tool to speed up the workflow without visual output in the final result, it can be argued that explicit mention of AI is not necessary. However, when AI is a major part of the production workflow and/or the final result, the public must understand when, how, and why AI was used, and where its influence appears.
Key Considerations:
- Dataset transparency and attribution
- Environmental cost of computation
- Human authorship vs. automated generation
- Communicating where AI influenced the final work
_________________________________________
Conclusion
AI does not eliminate artistic expertise; it cannot be used in a meaningful way without it. In artistic, research-driven projects, AI can serve as a 'helping hand' to speed up the workflow or enable more testing and iteration within the project. To conclude: AI accelerates exploration, while human vision remains the author.
_________________________________________
Want to see a use case where this framework was applied? Watch the video below and go to: https://daeresearch.be/spioenkop/?preview_id=5405&preview_nonce=b93bed83a2&preview=true