Art is an interesting and complex discipline. Creating an artistic image is often not only time-consuming but also requires great expertise. And if producing a 2D image is already hard, consider extending the problem beyond the image plane, to time (animated content) or to 3D space (sculptures or virtual environments). This introduces new obstacles and challenges, which this paper addresses.
Previous work on 2D stylization processes video content frame by frame. Individual frames can look convincing in isolation, but the resulting video often shows flickering artifacts because the generated frames are temporally incoherent. Moreover, these methods do not reason about the 3D environment, which would increase the complexity of the task. Other projects that stylize 3D content directly suffer from inaccurate geometric reconstructions of point clouds or triangle meshes and from a lack of fine detail, because the style is applied to geometry whose characteristics differ from those of the original scene.
The proposed technique, called Artistic Radiance Fields (ARF), can transfer the artistic features of a single 2D image to a real-world 3D scene, leading to renderings that faithfully reflect the artistic style of the input image (Figure 1).
For this purpose, the researchers take a photorealistic radiance field, reconstructed from multiple photographs of a real-world scene, and stylize it into an artistic radiance field that supports consistent stylized renderings from novel viewpoints. The results are shown in Fig. 1.
For example, given photographs of a real scene as input and Van Gogh’s famous “Starry Night” painting as the style to be applied, the result is a radiance field whose renderings take on the brushstrokes and color palette of the painting.
The ARF pipeline is shown in the figure below (Figure 2).
The key to this architecture is the combination of a Nearest Neighbor Feature Matching (NNFM) loss and a color transfer step.
NNFM compares the feature maps of the rendered view and of the style image, both extracted with the well-known VGG-16 Convolutional Neural Network (CNN). In this way, the features guide the transfer of complex, high-frequency visual details while remaining consistent across multiple viewpoints.
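The matching idea can be sketched in a few lines: for each feature vector extracted from the rendered view, find its nearest neighbor among the style-image features and penalize their distance. This is a minimal NumPy sketch assuming the VGG-16 feature maps have already been flattened into per-pixel feature vectors; the function name and cosine-distance choice are illustrative, not the authors' code.

```python
import numpy as np

def nnfm_loss(render_feats, style_feats):
    """Nearest Neighbor Feature Matching loss (sketch).

    render_feats: (N, C) feature vectors from the rendered view.
    style_feats:  (M, C) feature vectors from the style image.
    For each rendered feature, find the most similar style feature
    (cosine similarity) and average the resulting cosine distances.
    """
    r = render_feats / np.linalg.norm(render_feats, axis=1, keepdims=True)
    s = style_feats / np.linalg.norm(style_feats, axis=1, keepdims=True)
    sim = r @ s.T                # (N, M) pairwise cosine similarities
    nearest = sim.max(axis=1)    # best-matching style feature per pixel
    return float(np.mean(1.0 - nearest))
```

Because each rendered feature is free to match any style feature, the loss transfers local texture statistics without requiring the two images to be spatially aligned.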
Color transfer is a technique used to avoid apparent color mismatches between the reconstructed scene and the style image. It applies a linear transformation to the pixels of the input images so that their statistics match those of the style image's pixels.
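One common way to realize such a linear color transformation is to match the mean and covariance of the two pixel distributions. The sketch below illustrates that idea with NumPy; the function name and the eigendecomposition-based matrix square root are assumptions for illustration, not necessarily the exact transform used in the paper.

```python
import numpy as np

def match_color(content, style, eps=1e-5):
    """Linearly transform content pixels so their mean and covariance
    match the style image's pixel statistics (illustrative sketch).

    content: (N, 3) RGB pixels of the content image.
    style:   (M, 3) RGB pixels of the style image.
    """
    mu_c, mu_s = content.mean(axis=0), style.mean(axis=0)
    cov_c = np.cov(content, rowvar=False) + eps * np.eye(3)
    cov_s = np.cov(style, rowvar=False) + eps * np.eye(3)

    def mat_sqrt(cov):
        # Symmetric PSD matrix square root via eigendecomposition.
        vals, vecs = np.linalg.eigh(cov)
        return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

    # T maps a whitened content pixel into the style color distribution.
    T = mat_sqrt(cov_s) @ np.linalg.inv(mat_sqrt(cov_c))
    return (content - mu_c) @ T.T + mu_s
```

After the transform, the content pixels have (approximately) the style image's mean color and color covariance, which removes gross color mismatches before stylization.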
In addition, the architecture uses a deferred back-propagation technique, allowing loss computation on full-resolution images with a low GPU memory load. The first step renders the image at full resolution and computes the loss and its gradients with respect to the pixel colors, producing a cached gradient image. Then these cached gradients are back-propagated patch-wise to the underlying scene parameters.
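The two-stage procedure can be sketched with a toy differentiable renderer. Here the "renderer" is a simple linear map whose gradient is coded by hand, so the patch-wise accumulation of cached pixel gradients can be checked against a single full backward pass; the linear renderer and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy "renderer": pixel i is produced from scene parameters p via
# image[i] = A[i] @ p. Stage 1 computes the per-pixel loss gradient
# on the full image without touching p; stage 2 replays rendering
# patch by patch, pushing the cached gradients back to p, so only
# one patch's computation needs to be held in memory at a time.
rng = np.random.default_rng(0)
n_pixels, n_params = 64, 8
A = rng.normal(size=(n_pixels, n_params))   # toy rendering weights
p = rng.normal(size=n_params)               # scene parameters
target = rng.normal(size=n_pixels)          # stylization target

# Stage 1: full-resolution render; cache only dL/d(pixel color).
image = A @ p
cached_grad = 2.0 * (image - target) / n_pixels   # gradient of MSE loss

# Stage 2: back-propagate the cached gradients patch-wise.
patch = 16
grad_p = np.zeros(n_params)
for start in range(0, n_pixels, patch):
    sl = slice(start, start + patch)
    grad_p += A[sl].T @ cached_grad[sl]     # chain rule, one patch at a time

# Sanity check: identical to the gradient of one full backward pass.
full_grad = A.T @ cached_grad
assert np.allclose(grad_p, full_grad)
```

The key point is that the full-resolution loss is evaluated once, while the memory-hungry back-propagation through the renderer is split into patches whose gradients simply add up.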
The ARF method presented in this paper brings several advantages. First, it produces high-quality stylized scenes that closely match the style image. Second, stylized images can be rendered from novel viewpoints, effectively bringing the artwork into 3D. Finally, thanks to the deferred back-propagation technique, the architecture minimizes the GPU memory footprint.
This Article is written as a research summary article by Marktechpost Staff based on the research paper 'ARF: Artistic Radiance Fields'. All Credit For This Research Goes To Researchers on This Project. Check out the paper, github link and project.
Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. Student at the Institute of Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He currently works at the Christian Doppler Laboratory ATHENA and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE analysis.