Autonomous Rover Testing Simulator in Unreal Engine 5
View the Project on GitHub MissouriMRDT/RoveSoSimulator
Return to RoveSoDocs Guides for Today, Tomorrow, and Forever.
Project Goal: To develop a robust, high-performance tool within Unreal Engine capable of generating large-scale, georeferenced lidar point clouds. The primary objective is to emulate the format and structure of real-world USGS lidar datasets to facilitate high-fidelity “Sim-to-Real” testing for robotics applications.
The core challenge of this project was to scan a massive (4km x 4km) virtual landscape and export billions of data points without running out of memory or freezing the editor for an unacceptable amount of time. This required several key architectural decisions.
1.1: Core Technology: Grid-Based Ray Tracing in C++

To achieve the highest fidelity, we opted for a Grid-Based Ray Tracing method. This technique simulates an aerial lidar scanner by casting a dense grid of parallel rays from the sky downwards. The primary logic was implemented in C++ for three critical reasons:

- Performance: casting millions of line traces across a 4km x 4km landscape is far faster in native code than in Blueprints.
- Parallelism: the engine's ParallelFor task system, used to spread chunk processing across CPU cores, is driven from C++.
- Precision: georeferenced coordinates require double precision to avoid significant errors over large distances. C++ handles doubles natively, whereas Blueprints primarily use floats.

1.2: User Interface: The Editor Utility Widget
While the core logic is C++, the user interface is a Blueprint Editor Utility Widget (EUW_LidarScanner). This provides a friendly, visual front-end that allows any team member (not just programmers) to use the tool. The Blueprint’s role is simply to gather user inputs and pass them to the C++ backend.

1.3: The Memory Challenge: A Chunking (Tiling) System

Our initial prototype attempted to generate and store all points in a single large array in memory. This approach failed due to extreme memory consumption (roughly 1GB of RAM per million points), making it impossible to scan the full map. The solution was a chunking (tiling) system: the scan area is divided into smaller square chunks, each of which is scanned and written to its own file before its memory is released.
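The tiling idea can be sketched outside the engine. In this minimal C++ sketch, the ChunkTask struct, BuildChunkTasks, and EstimateChunkRamGB names are illustrative stand-ins for the project's actual types; the RAM estimate simply applies the ~1GB-per-million-points figure quoted above, and the ray spacing value is an assumption.

```cpp
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>

// Illustrative stand-in for the project's per-chunk "task" struct.
struct ChunkTask {
    double MinX, MinY;   // lower-left corner of this chunk, in world units
    double MaxX, MaxY;   // upper-right corner
    std::string OutFile; // per-chunk CSV filename
};

// Tile a rectangular scan area into square chunks of ChunkSize units.
// Edge chunks are clamped so the grid exactly covers the requested bounds.
std::vector<ChunkTask> BuildChunkTasks(double MinX, double MinY,
                                       double MaxX, double MaxY,
                                       double ChunkSize) {
    std::vector<ChunkTask> Tasks;
    const int NumX = static_cast<int>(std::ceil((MaxX - MinX) / ChunkSize));
    const int NumY = static_cast<int>(std::ceil((MaxY - MinY) / ChunkSize));
    for (int Y = 0; Y < NumY; ++Y) {
        for (int X = 0; X < NumX; ++X) {
            ChunkTask T;
            T.MinX = MinX + X * ChunkSize;
            T.MinY = MinY + Y * ChunkSize;
            T.MaxX = std::min(T.MinX + ChunkSize, MaxX);
            T.MaxY = std::min(T.MinY + ChunkSize, MaxY);
            T.OutFile = "chunk_" + std::to_string(X) + "_" +
                        std::to_string(Y) + ".csv";
            Tasks.push_back(T);
        }
    }
    return Tasks;
}

// Rough sizing helper using the ~1 GB of RAM per million buffered points
// figure from this guide. ChunkSize and Spacing share the same world units.
double EstimateChunkRamGB(double ChunkSize, double Spacing) {
    const double PointsPerSide = ChunkSize / Spacing;
    return (PointsPerSide * PointsPerSide) / 1.0e6;
}
```

Under these assumptions, a 4km x 4km area with 500-unit chunks yields 64 tasks, and a 500-unit chunk sampled at 0.5-unit spacing buffers about a million points, i.e. roughly 1GB at its peak.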
1.4: The Performance Challenge: Parallel Processing

The chunking system solved the memory issue, but processing hundreds of chunks sequentially was still time-consuming. The solution was the engine's ParallelFor task system, which distributes the processing of all chunks across all available CPU cores. This change resulted in a massive, near-linear speedup, reducing scan times from hours to minutes.
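Unreal's ParallelFor is engine-specific, but the pattern is easy to sketch portably. The stand-alone C++ approximation below uses worker threads pulling chunk indices from a shared atomic counter; ParallelForChunks is an illustrative name, not the engine API, and real ParallelFor handles scheduling details this sketch ignores.

```cpp
#include <algorithm>
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Portable analogue of Unreal's ParallelFor: worker threads claim chunk
// indices from a shared atomic counter until every chunk is processed.
// Each chunk is independent (it scans and saves its own file), so no
// further synchronization between workers is required.
void ParallelForChunks(int NumChunks,
                       const std::function<void(int)>& ProcessChunk) {
    std::atomic<int> NextChunk{0};
    const unsigned NumThreads =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> Workers;
    for (unsigned T = 0; T < NumThreads; ++T) {
        Workers.emplace_back([&]() {
            for (int I = NextChunk.fetch_add(1); I < NumChunks;
                 I = NextChunk.fetch_add(1)) {
                ProcessChunk(I); // scan and export one chunk
            }
        });
    }
    for (std::thread& W : Workers) W.join();
}
```

Because each chunk writes its own output file, the workers never contend on shared data, which is what makes the near-linear speedup described above achievable.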
1.5: The Export Format: Intermediate CSV for CloudCompare

Directly exporting to the complex, binary .laz format would require integrating a large, third-party C++ library, a significant and often fragile engineering task. Instead, each chunk is exported as a plain comma-separated value (.csv) text file. This has several advantages:

- Writing CSV requires no external dependencies, only the engine's built-in file utilities.
- The output is human-readable, which makes debugging individual chunks straightforward.
- CloudCompare can import the chunk files, merge them, and export the final .laz file.

The core of the system resides in the LidarScannerLibrary C++ class, which contains two primary functions.
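To illustrate why doubles matter at UTM scale, here is a minimal sketch of formatting one point as a CSV row. The column order and three-decimal precision are assumptions for illustration, not necessarily the tool's exact output format.

```cpp
#include <cstdio>
#include <string>

// Format one lidar point as a CSV row. UTM eastings and northings run into
// the millions of meters, so double precision and a fixed decimal format
// matter: a 32-bit float carries only ~7 significant digits and would lose
// centimeter-level detail at these magnitudes.
std::string PointToCsvRow(double Easting, double Northing, double Altitude) {
    char Buffer[96];
    std::snprintf(Buffer, sizeof(Buffer), "%.3f,%.3f,%.3f",
                  Easting, Northing, Altitude);
    return std::string(Buffer);
}
```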
2.1: The “Manager” Function: ScanWorldInChunks()
This is the high-level function called by the Blueprint UI. Its responsibilities are:
- Validates that a GeoReferencingSystem actor exists in the world.
- Writes a _CRS_Info.txt metadata file containing the UTM zone (Projected CRS) for the dataset.
- Builds an array (TArray) of “task” structs, where each task contains the specific bounds and output filename for one chunk.
- Dispatches the tasks to the ParallelFor system to be executed across multiple CPU threads.

2.2: The “Worker” Function: ScanAndExportChunkToCSV()
This function is executed by each thread from the ParallelFor loop. It performs the actual work on a single chunk:
- Casts a dense grid of downward line traces across its chunk's bounds.
- Uses the GeoReferencingSystem’s EngineToProjected function to convert each hit location to FProjectedCoordinates (Easting, Northing, Altitude).
- Appends each converted point as an FLidarPoint struct to its temporary array.
- Writes the buffered rows to a .csv file on disk using FFileHelper::SaveStringArrayToFile.

This guide outlines the complete process for a user to generate a point cloud.
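As an engine-free illustration of the per-chunk worker pipeline in 2.2, the sketch below substitutes a synthetic height function for real line traces, a simple origin offset for EngineToProjected, and std::ofstream for FFileHelper; every name in it is illustrative rather than the project's actual code.

```cpp
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

// Stand-in for a downward line trace: returns terrain height at (X, Y).
// In the real tool this would be an engine raycast against the landscape.
double TraceGroundHeight(double X, double Y) {
    return 100.0 + 0.01 * X + 0.02 * Y; // synthetic sloped terrain
}

// Mirror of the worker's steps for one chunk: trace a grid of points,
// convert each hit to projected coordinates (here a plain origin offset),
// buffer the rows, then write a single CSV file. Returns the point count.
int ScanChunkToCsv(double MinX, double MinY, double MaxX, double MaxY,
                   double Spacing, double OriginEasting,
                   double OriginNorthing, const std::string& OutPath) {
    std::vector<std::string> Rows;
    Rows.push_back("Easting,Northing,Altitude"); // header row (assumed layout)
    for (double Y = MinY; Y < MaxY; Y += Spacing) {
        for (double X = MinX; X < MaxX; X += Spacing) {
            const double Alt = TraceGroundHeight(X, Y);
            char Buf[96];
            std::snprintf(Buf, sizeof(Buf), "%.3f,%.3f,%.3f",
                          OriginEasting + X, OriginNorthing + Y, Alt);
            Rows.push_back(Buf);
        }
    }
    std::ofstream Out(OutPath);
    for (const std::string& Row : Rows) Out << Row << "\n";
    return static_cast<int>(Rows.size()) - 1;
}
```

Buffering rows and writing the file once at the end mirrors the real worker's use of FFileHelper::SaveStringArrayToFile, keeping disk I/O out of the inner trace loop.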
Step 3.1: Scene Preparation
- Ensure a GeoReferencingSystem actor is present in the level. The Projected CRS and Origin Location must be set correctly.
- Add two Empty Actor objects to the level. Position them at opposite corners of the desired total scan area.
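Because the two markers may sit at any pair of opposite corners, a scanner consuming them should take per-axis minima and maxima rather than assume which marker is "min" and which is "max". A minimal sketch (Bounds2D and BoundsFromMarkers are illustrative names, not the tool's API):

```cpp
#include <algorithm>

// Axis-aligned 2D scan bounds derived from two corner markers.
struct Bounds2D { double MinX, MinY, MaxX, MaxY; };

// Accept the marker positions in either order by taking the per-axis
// min and max of the two corners.
Bounds2D BoundsFromMarkers(double AX, double AY, double BX, double BY) {
    return { std::min(AX, BX), std::min(AY, BY),
             std::max(AX, BX), std::max(AY, BY) };
}
```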
Step 3.2: Configure and Run the Scanner Tool
- Right-click the EUW_LidarScanner asset and select Run Editor Utility Widget.
- Assign the two marker actors to the Min Bounds Marker and Max Bounds Marker fields.
- Set the chunk size; a value from 100 to 500 is recommended. Smaller chunks use less RAM.
- Run the scan and monitor progress in the Output Log.

Step 3.3: Final Processing in CloudCompare
- Select all of the exported .csv files and drag them simultaneously into the CloudCompare window.
- In the DB Tree view, select all the newly imported layers. Navigate to Tools -> Merge. This will combine all chunks into a single entity.
- Export the merged cloud in the desired format (.laz for compressed, .las for uncompressed) and save the final output file. The metadata from the _CRS_Info.txt file can be used to set the coordinate system information in the target application.