  • Published: 2024-11-27
[chinagraph2024] A lightweight parameter optimization method for real-time rendering

Gu Yiping1, Tan Baijun1, Xu Xiang2, Wang Lu1 (1. Shandong University; 2. Shandong University of Finance and Economics)

Abstract
Objective With the spread of technologies such as digital twins and virtual reality, user demands for image quality and smoothness keep rising. Constrained by key hardware performance, however, personal computers and mobile devices often have to raise frame rates by adjusting the parameters of the game or rendering engine, which inevitably sacrifices rendering quality. How to choose reasonable rendering parameters that reduce time cost while achieving higher rendering quality has therefore become a problem of wide concern in graphics applications. Method This paper proposes a general, lightweight automatic parameter optimization method for real-time rendering. Extreme gradient boosting (XGBoost) is used to model the rendering time and image quality of a virtual scene under different parameter settings; after precomputation, the model is reduced to a lookup table (LUT). During actual rendering, the LUT automatically adjusts the rendering parameters according to the hardware state, scene information, and other conditions, reducing rendering time while preserving rendering quality. Result The method can be applied to a wide range of rendering techniques in games and rendering engines. We apply and test it on subsurface scattering and ambient occlusion. The results show that, compared with the best rendering parameters, our method shortens subsurface scattering rendering time by about 40% and ambient occlusion rendering time by about 70%, while the image error increases by only about 2% in both cases. Conclusion The proposed method reduces rendering time while maintaining high rendering quality, is highly practical, and is applicable to various rendering techniques in games and rendering engines. Our code repository: https://github.com/LightweightRenderParamOptimization/LightweightRenderParamOptimization.
Keywords
[chinagraph2024] Lightweight parameter optimization for real-time rendering

Gu Yiping1, Tan Baijun1, Xu Xiang2, Wang Lu1 (1. Shandong University; 2. Shandong University of Finance and Economics)

Abstract
Objective The rapid development of virtual reality, augmented reality, and digital twin technologies has not only transformed the way people perceive virtual worlds but has also greatly advanced graphics rendering techniques. In these emerging fields, the quality of the user experience depends directly on the realism and interactivity of the virtual world, making high-quality graphics and smooth performance indispensable. While high-quality rendering has made significant progress on PCs and consoles, applying these techniques to mobile devices such as laptops, tablets, and smartphones remains difficult. Mobile devices are considerably limited in processing power, graphics capability, and memory compared to high-end PCs and dedicated gaming consoles, and they also require long battery life and cannot afford the high power consumption typical of desktop systems. Achieving high-quality, low-latency rendering under such constrained hardware conditions is therefore a major challenge. Modern game and rendering engines integrate a variety of rendering techniques (such as subsurface scattering, ambient occlusion, screen-space reflections, and normal mapping) into a complex rendering pipeline. These techniques come with numerous adjustable parameters, such as the number of scattering samples, shadow precision, reflection intensity, ambient occlusion level, texture resolution, and level of detail. These parameters significantly affect image quality but also directly determine rendering computation and time cost. Finding an optimal balance between image quality and rendering time is therefore critical when optimizing rendering parameters. Typically, these parameters are configured manually by developers according to the hardware environment and scene requirements.
Developers often rely on trial and error, adjustment, and visual feedback to optimize these parameters for ideal rendering performance and quality. This manual approach is inefficient, error-prone, and becomes nearly impossible when dealing with complex 3D scenes and dynamic game environments. Moreover, as games and virtual reality technologies evolve, real-time rendering must complete large amounts of complex calculations for each frame. Any misconfiguration can lead to performance bottlenecks or distorted visual effects. For example, shadow rendering precision may be crucial in some scenes but can be reduced in others to save computational resources. If these parameters cannot be dynamically optimized in real-time, the rendering engine might overuse resources in certain frames, leading to frame rate drops or increased latency, which severely affects user experience. To address this issue, researchers in recent years have explored various methods for optimizing rendering parameters. These include sampling scene space using octrees, leveraging Pareto frontiers to find locally optimal parameters, using regression analysis and linear functions to quickly fit low-power parameters, or employing neural networks to estimate performance bottlenecks in real time based on drawcall counts. While these methods have achieved some success in rendering optimization, they still have significant limitations. First, function-fitting methods are prone to errors across different scenes, making generalization difficult. Second, the complexity of neural network inference introduces substantial computational overhead. Each time the neural network is used for parameter prediction, it adds extra computational burden. In real-time rendering, any delay can negatively affect performance. Consequently, existing neural network-based optimization methods often perform parameter prediction every few dozen frames instead of calculating the optimal parameters for every single frame. 
This non-real-time parameter updating is particularly problematic in dynamic scenes where the complexity of the scene and camera view may change drastically at any moment. Neural networks may fail to respond to these changes promptly, compromising rendering stability and image quality. For instance, when the camera moves quickly, the objects and lighting in the scene may undergo significant changes, rendering the previous parameter predictions obsolete, leading to visual artifacts or frame rate fluctuations, which in turn degrades the user experience. Method To address these issues, this paper proposes a lightweight, real-time automatic rendering parameter optimization method. The proposed method is computationally efficient and allows for adaptive per-frame rendering parameter updates, ensuring consistency in rendering after parameter adjustments. The method is divided into three stages: model training, pre-computation, and adaptive real-time rendering. In the model training stage, various rendering parameters, hardware configurations, and scene information are used within a virtual environment to collect data on rendering time and image quality. This data is then used to train the model, which is divided into two parts: one for evaluating rendering time and the other for evaluating image quality. This separation enables the model to fully explore the intrinsic relationships between parameters, rendering time, and image quality. Additionally, the specially designed virtual scenes provide sufficient sample information, allowing the model to generalize to new scenes. In the pre-computation stage, the key step is to first assess the real-time hardware information of the device, including the processor, graphics card, and other performance parameters. This step is completed during scene loading to ensure that rendering parameter optimization can be customized based on the specific performance of the device. 
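The two-model separation described above can be sketched as follows. The paper trains two XGBoost regressors, one for rendering time and one for image quality; to keep this illustration dependency-free, plain least-squares fits stand in for the boosted models, and the feature names, data, and target functions are hypothetical, not from the paper.

```python
import numpy as np

# Illustrative features: [sample_count, shadow_precision], normalized to [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(256, 2))
A = np.hstack([X, np.ones((256, 1))])  # design matrix with bias column

# Stand-in ground truth: render time rises with both parameters,
# while image error falls as the sampling density rises.
y_time = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1
y_error = 1.0 - 0.8 * X[:, 0]

# Two independent fits, mirroring the paper's separate time/quality models.
w_time, *_ = np.linalg.lstsq(A, y_time, rcond=None)
w_error, *_ = np.linalg.lstsq(A, y_error, rcond=None)

def predict_time(params):
    """Predicted rendering time for a parameter vector."""
    return float(np.append(params, 1.0) @ w_time)

def predict_error(params):
    """Predicted image error for a parameter vector."""
    return float(np.append(params, 1.0) @ w_error)
```

Keeping the two predictors separate lets each model capture its own relationship between parameters and its target, as the text notes.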
Subsequently, the system simplifies the optimization problem of rendering time and image quality from a two-dimensional multi-objective optimization problem to two independent one-dimensional linear search tasks. This significantly accelerates the pre-computation speed, as linear search is far simpler than complex optimization in two-dimensional space. Specifically, there is typically a trade-off between rendering time and image quality, and optimizing these two factors requires finding a balance among many parameter combinations. To simplify this process, the system decomposes it into two independent one-dimensional linear search tasks. First, within the given rendering time threshold (set to the fastest 20% in this paper), the system searches for the optimal rendering time settings achievable under the current hardware conditions. Next, the system searches along the image quality dimension, ensuring that rendering time does not increase significantly, to find rendering parameters that maximize image quality. By employing this two-step search strategy, the system effectively balances rendering time and image quality while ensuring the optimization process is both efficient and accurate. Once optimization is completed, the resulting model is simplified into a lookup table (LUT), which records the optimal rendering parameter combinations for different hardware configurations. This LUT is tailored according to the device's hardware parameters, ready for use in the subsequent real-time rendering phase. In the adaptive real-time rendering stage, before rendering each frame, the system quickly retrieves the optimal rendering parameter settings from the pre-generated LUT based on the current hardware status and scene information. The lookup speed of the LUT is extremely fast, significantly reducing the computational overhead compared to real-time parameter calculation. 
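The two-step search above can be sketched as two 1-D linear scans over a candidate parameter grid: first keep the fastest 20% of settings, then pick the one with the lowest predicted image error. The `predict_time` and `predict_error` callables below are hypothetical stand-ins for the pretrained models; the candidate grid is illustrative.

```python
def build_lut_entry(candidates, predict_time, predict_error, frac=0.2):
    """Collapse the 2-D time/quality trade-off into two 1-D scans."""
    # Step 1: scan along the time dimension, keeping the fastest 20%.
    by_time = sorted(candidates, key=predict_time)
    k = max(1, int(len(by_time) * frac))
    fast = by_time[:k]
    # Step 2: scan along the quality dimension among the survivors.
    return min(fast, key=predict_error)

# Illustrative: one tunable sample count; time rises with it, error falls.
candidates = [(n,) for n in range(1, 21)]
best = build_lut_entry(candidates,
                       predict_time=lambda p: 0.5 * p[0],
                       predict_error=lambda p: 1.0 / p[0])
# best == (4,): the fastest 20% is n in 1..4, and n == 4 has the lowest error.
```

Run once per hardware bucket during pre-computation, the winning parameter vectors populate the LUT consulted at render time.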
This allows the system to complete parameter selection within milliseconds and immediately apply these parameters for rendering, ensuring both efficiency and flexibility. By completing the extensive pre-computation in advance, the system only needs to perform simple lookup operations during actual rendering, achieving a balance between high-quality rendering and fast responsiveness. The selected parameters are applied directly to the rendering of the current frame, so that each frame achieves the best result the hardware performance and scene requirements allow. Result Compared with neural network and LightGBM models applied to subsurface scattering and ambient occlusion, the proposed method shows advantages across multiple dimensions: image quality, scene dependency, rendering time, and model performance. Specifically, across various scenes, it reduces subsurface scattering rendering time by approximately 40% and ambient occlusion rendering time by about 70%, with only around a 2% increase in image error, and the real-time inference time per frame is less than 0.1 milliseconds. Conclusion The proposed method effectively reduces rendering time while maintaining high rendering quality, making it practical for the actual demands of modern games and rendering engines. The implementation can be accessed at: https://github.com/LightweightRenderParamOptimization/LightweightRenderParamOptimization.
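The adaptive stage amounts to quantizing the current hardware and scene state into a key and reading precomputed parameters back from the LUT. A minimal sketch follows; the key layout, bucket counts, and parameter tuples are illustrative assumptions, not taken from the paper.

```python
def make_key(gpu_load, scene_complexity, buckets=10):
    """Quantize continuous state in [0, 1] into coarse LUT buckets."""
    return (min(int(gpu_load * buckets), buckets - 1),
            min(int(scene_complexity * buckets), buckets - 1))

# Hypothetical precomputed table:
# key -> (sample_count, shadow_precision_percent).
lut = {(g, s): (4 + g, 50 + 5 * s) for g in range(10) for s in range(10)}

def params_for_frame(gpu_load, scene_complexity):
    # A dict lookup costs microseconds, well under the 0.1 ms per-frame budget.
    return lut[make_key(gpu_load, scene_complexity)]
```

Because the table is built per device during scene loading, the per-frame cost is a single hash lookup, which is what keeps the method compatible with per-frame parameter updates.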
Keywords
