During Project Reviews as a consultant for the Customer Success team, I often work with customers that create game-switching applications. These applications have one main menu or theme menu, presenting multiple games for the player to choose from. In those setups, the main concerns are how to keep the time spent switching between games as short as possible and how to ensure optimal performance across the games. In this blog post we will explore different approaches based on project needs, as well as some best practices that can be useful for any game environment, with or without a game-switching setup.

When planning a multi-application environment, whether for gaming, entertainment, or industrial simulation, the most important decision to make is how to manage game executables. Many factors can influence this decision:

- How many games will the platform handle?
- How big are the games?
- Are the games made with the same Unity version?
- What are the application's bottlenecks?

Other factors include the target hardware, memory and CPU, and disk speed (SSD vs. HDD vs. SD card). Answering these questions and deciding how to handle executables is crucial to understanding whether we need separate executables for each game, one shared executable for multiple games, or a combination of both to ensure the applications perform optimally.

Having multiple executables is a great option for handling games made with different Unity versions. With this approach it's possible to reduce the time needed to switch between games by caching the executables in memory and leaving each instance running in the background. However, keeping all executables in memory is not always the best choice, since it can strain memory. It should be avoided when the individual games have a high memory footprint and/or when there are many games in the game-switching application.

To ease memory constraints, it is possible for games to share a single executable. The games can be in a single Unity project, or each have their own project, as long as the games share the same Unity version. Since Unity 2022 LTS, on Windows it's possible to pass the -datafolder argument via the command line, specifying the selected game's data folder in order to switch games (a minimal launcher sketch appears at the end of this section). One potential disadvantage of this approach is slower game-switching times; therefore it's important to follow loading best practices to reduce this drawback.

No matter the nature of the game we're developing or which platform it targets, it's important to spend as little time as possible between the moment a game is selected and the moment it's fully loaded on screen. This goal becomes particularly important for game-switching applications.

A great way to handle loading is by using Addressables. With Addressables, content is loaded and released on an as-needed basis. This deferred loading strategy is the most efficient way to reduce load times, since it limits the amount of data that has to be loaded during initial startup. Furthermore, it can help prevent CPU background activity related to background games, which can contribute to CPU bottlenecks. The Addressables: Planning and best practices blog post is a great starting point to learn more about Addressables and how they can help improve your game.

Another great way to ensure faster loading, regardless of how many executables we're using, is via the asynchronous loading APIs.
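As a minimal sketch of the shared-executable approach described above, a menu application could start the shared player and point it at the selected game's data folder with -datafolder. The executable path, folder layout, and method name here are hypothetical, and the snippet assumes a Windows standalone player built with Unity 2022 LTS or newer.

```csharp
using System.Diagnostics;

// Minimal launcher sketch (hypothetical paths and names) for the shared-executable
// approach: one player binary, one data folder per game, switched via -datafolder.
public static class GameLauncher
{
    public static Process LaunchGame(string gameDataFolder)
    {
        var startInfo = new ProcessStartInfo
        {
            // Hypothetical shared player executable built once for all games.
            FileName = @"C:\Arcade\SharedPlayer\SharedPlayer.exe",
            // -datafolder points the player at the selected game's data folder,
            // as described above.
            Arguments = $"-datafolder \"{gameDataFolder}\"",
            UseShellExecute = false
        };
        return Process.Start(startInfo);
    }
}
```

The menu would call LaunchGame with the data folder matching the player's selection, after closing or pausing the game that is currently running.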
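As an illustration of asynchronous loading, a bootstrap or menu scene can load the selected game's content scene additively without blocking the main thread. This is a minimal sketch; the scene name is a placeholder.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch: load a game scene asynchronously from a bootstrap/menu scene.
// The menu would call StartCoroutine(LoadGameScene("SelectedGameScene")) when a game is picked.
public class AsyncGameLoader : MonoBehaviour
{
    public IEnumerator LoadGameScene(string sceneName) // placeholder scene name
    {
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
        while (!op.isDone)
        {
            // A loading screen could display op.progress here.
            yield return null;
        }
    }
}
```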
When loading asynchronously, the Unity main thread executes a process called "main thread integration," which is responsible for initializing native and managed objects in a time-sliced manner. Since this process performs some operations that are not thread-safe, it has to occur on the main thread, and the time allowed for each integration slice is limited to prevent the game from freezing for too long. The amount of time that can be spent on integrations is defined by the Application.backgroundLoadingPriority property. We recommend setting backgroundLoadingPriority to High (50 ms) during loading screens and then returning it to BelowNormal (4 ms) or Low (2 ms) when loading is complete.

An additional way to speed up loading is via Asynchronous Texture Upload. Async texture upload can decrease load time by controlling how much time and memory is used for uploading textures and meshes to the GPU. The Understanding Async Upload Pipeline blog post provides detailed information on how this process works.

These practices will help speed up loading times:

- Minimize your scene content as much as possible. Use a bootstrap scene to load only what's needed for the game to be in a playable state, then load additional scenes when needed.
- Disable cameras during loading screens.
- Disable UI Canvases while they are being populated during loading.
- Parallelize network requests.
- Avoid complex Awake/Start implementations and make use of worker threads.
- Always use texture compression.
- Stream large media files (like audio files and textures) instead of keeping them in memory.
- Avoid the JSON Serializer, and instead use binary serializers.

As mentioned earlier, memory is not the only concern for multi-game environments; background CPU activity can also take a toll on the player's gaming experience. When games are not actively being played, their code is still running on the CPU, which can starve the active game of CPU time and cause it to perform suboptimally. A way to prevent CPU starvation for the active game, and for any other platform processes, is to set the Run In Background option to false in the Player settings. With Run In Background disabled, the Unity game loop stops while the game is not in focus. The setting can also be changed dynamically via script through Application.runInBackground (see the sketch at the end of this section).

One thing to note is that the Run In Background setting won't stop any custom scripting threads from running, so it's important to put the threads of non-playing games to sleep via the Thread.Sleep C# method. Remember that working with background threads in Unity requires careful programming. Since these threads don't have direct access to Unity's API, there is a greater chance of creating issues such as deadlocks and race conditions, and preventing them requires proper synchronization with the main Unity thread. To properly implement multithreading, review the Limitations of async and await tasks section of the Overview of .NET in Unity manual page and the MSDN documentation about threads and threading. Unity 6 introduces the Awaitable class, which offers better support for async/await (see the loading sketch below).
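Tying the loading recommendations together, a loading routine might raise backgroundLoadingPriority while the loading screen is visible and restore it when loading completes. This is a sketch that assumes Unity 6 (for Awaitable.NextFrameAsync) and uses a placeholder scene name.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public static class LoadingRoutine
{
    // Sketch: give main thread integration more time per frame while a loading
    // screen is visible, then give the time back to gameplay once loading finishes.
    public static async Awaitable LoadGameAsync(string sceneName) // placeholder scene name
    {
        Application.backgroundLoadingPriority = ThreadPriority.High;          // ~50 ms slices
        try
        {
            AsyncOperation op = SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
            while (!op.isDone)
                await Awaitable.NextFrameAsync();                             // Unity 6 Awaitable
        }
        finally
        {
            Application.backgroundLoadingPriority = ThreadPriority.BelowNormal; // ~4 ms slices
        }
    }
}
```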
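And as a sketch of the Run In Background and Thread.Sleep recommendations above, the snippet below disables Run In Background from script and puts a hypothetical background worker thread to sleep while the application is out of focus; the worker's actual job is left as a placeholder.

```csharp
using System.Threading;
using UnityEngine;

// Sketch: stop the game loop when unfocused and put a custom worker thread to sleep.
public class BackgroundActivityController : MonoBehaviour
{
    volatile bool _paused;
    Thread _worker;

    void Awake()
    {
        // Equivalent to disabling the Run In Background player setting, done from script.
        Application.runInBackground = false;

        _worker = new Thread(WorkerLoop) { IsBackground = true };
        _worker.Start();
    }

    void OnApplicationFocus(bool hasFocus)
    {
        // Called by Unity on the main thread when focus changes.
        _paused = !hasFocus;
    }

    void WorkerLoop()
    {
        while (true)
        {
            if (_paused)
            {
                // Sleep instead of spinning while this game is in the background.
                Thread.Sleep(250);
                continue;
            }
            // ... do the game's background work here (placeholder) ...
            Thread.Sleep(16);
        }
    }
}
```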
It can be difficult and time consuming to identify and fix the causes of memory leaks, especially in the later stages of development. As cliché as it may sound, prevention is always better than the cure. Here are a few recommendations that can help prevent leaks in any game environment:

- When creating new objects/assets in memory, make sure to delete them when they are no longer needed.
- If using Addressables, make sure to release unused assets.
- When loading/unloading scenes, assets should be properly removed from memory. Unity doesn't automatically unload assets when a level is unloaded, therefore it's important to make sure any unused assets are removed from memory. The Resources.UnloadUnusedAssets API can help clean up assets. However, it can cause CPU spikes, and it returns an operation that yields until the work is complete, so it should be used in non-performance-sensitive places such as loading screens (see the sketch after this list).
- Avoid frequently using Instantiate and Destroy on GameObjects. Doing so can lead to unnecessary managed allocations, while also being a costly CPU operation. However, in cases where using Destroy is necessary, make sure to remove all references to the object to avoid Leaked Shell Objects. A Unity Object's native memory is unloaded once the Scene it resides in is unloaded, or once the GameObject it is attached to (or one of its parents) is destroyed via Destroy; if C# code still holds a reference to that Unity Object after that point, the managed wrapper object, its Managed Shell, lives on in memory as a Leaked Shell Object.
- Be mindful when implementing events using singletons. A singleton instance holds references to all objects that have subscribed to its events. If those objects do not live as long as the singleton instance, and they do not unsubscribe from these events, they will remain in memory, causing a memory leak. If the event source gets disposed before the listeners, the reference gets cleared, and if the listeners are properly unregistered there is also no reference remaining. To solve and prevent this problem, we recommend implementing the Weak Event Pattern or IDisposable in all objects that listen to singleton events, and making sure they are properly disposed of in your code (see the sketch after this list). The Weak Event Pattern is a design pattern that helps you manage memory and garbage collection in event-driven programming, particularly when it comes to long-lived objects. It's especially useful when you have subscribers that are short-lived but the publisher is long-lived. Keep in mind these are C#-specific solutions: they work only with C# events and are not directly supported by UnityEvents or UI Toolkit. As such, we recommend implementing them only in your non-MonoBehaviour scripts.
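As a sketch of the scene-unloading recommendation above, unused assets can be released while a loading screen is already on screen, hiding the cost of the cleanup; the scene name is a placeholder.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: unload the previous game's scene and clean up unused assets
// while the loading screen hides the resulting CPU cost.
public class SceneCleanup : MonoBehaviour
{
    public IEnumerator UnloadPreviousGame(string previousScene) // placeholder scene name
    {
        yield return SceneManager.UnloadSceneAsync(previousScene);

        // Expensive call: do it behind a loading screen, not during gameplay.
        yield return Resources.UnloadUnusedAssets();
    }
}
```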
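And as a sketch of the last item, a listener can implement IDisposable so that its subscription to a long-lived singleton's C# event is always removed; the GameEvents singleton and ScoreListener names are hypothetical.

```csharp
using System;

// Hypothetical long-lived singleton that exposes a plain C# event.
public sealed class GameEvents
{
    public static GameEvents Instance { get; } = new GameEvents();
    public event Action<int> ScoreChanged;

    public void RaiseScoreChanged(int score) => ScoreChanged?.Invoke(score);
}

// Short-lived listener: unsubscribing in Dispose prevents the singleton
// from keeping this object alive after it is no longer needed.
public sealed class ScoreListener : IDisposable
{
    public ScoreListener()
    {
        GameEvents.Instance.ScoreChanged += OnScoreChanged;
    }

    void OnScoreChanged(int score)
    {
        // ... react to the score change (placeholder) ...
    }

    public void Dispose()
    {
        GameEvents.Instance.ScoreChanged -= OnScoreChanged;
    }
}
```

The owning code would create the listener when its game becomes active and call Dispose when that game is unloaded, so no reference is left behind in the singleton.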
Lastly, profiling, CI/CD testing, and stress testing from the early stages of development can be a real time saver: detecting leaks as they arise allows you to address them promptly, saving debugging time and ensuring optimal performance.

Source: https://unity.com/blog/optimize-game-menu-for-faster-loading