Experiments
Processing LAS files into point cloud partitions
To run the experiments, you must first partition the LAS file's point cloud for processing. This step splits the LAS file's point cloud data into many smaller point clouds and stores each in an individual Point Cloud Persistence Diagram Storage (PCPDS) object, under a common directory whose name is determined by the LAS file input and partition count.
- Run the command `python generate_pcpds_files.py`
- Enter the LAS filename, assuming it is in the default project directory.
- Enter the desired partition count per axis.
- Specify whether or not you would like to use multiprocessing in an attempt to speed up the partitioning
- NOTE: This is often much slower for smaller data sets due to the startup cost of multiprocessing in Python.
- This will generate a folder in the `collection_path` directory by default, with the naming schema 'LAS filename + "_" + partition count'
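The partitioning performed by the script can be sketched roughly as below. This is a hypothetical illustration only: the function name, the assumption of a 2D grid over the x/y ground plane, and the in-memory dictionary are all mine; the real script wraps each partition in a PCPDS object and writes it to disk.

```python
import numpy as np

def partition_point_cloud(points, partitions_per_axis):
    """Split an (N, 3) point cloud into a grid of smaller clouds.

    Sketch of the partitioning idea: points are bucketed by which
    grid cell their x/y coordinates fall in (a 2D grid is assumed).
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Avoid division by zero on degenerate (flat) axes.
    spans = np.where(maxs - mins > 0, maxs - mins, 1.0)
    # Cell index of each point along x and y.
    idx = ((points[:, :2] - mins[:2]) / spans[:2] * partitions_per_axis).astype(int)
    idx = np.clip(idx, 0, partitions_per_axis - 1)
    cells = {}
    for (i, j) in {tuple(c) for c in idx}:
        mask = (idx[:, 0] == i) & (idx[:, 1] == j)
        cells[(i, j)] = points[mask]
    return cells
```

Each cell's array would then be handed to one PCPDS object; with multiprocessing enabled, cells can be processed in parallel, at the cost of the pool startup overhead noted above.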
Generating persistence diagrams for a collection of point cloud partitions
The following experiments also require persistence diagrams to be generated for the PCPDS objects saved in the referenced collection. The guide above only partitions the data, setting just the point cloud portion of each PCPDS object, so each object initially has no persistence diagram. Diagram generation is kept as a separate step for flexibility: it lets you run another filtration method on the same collection without having to re-partition the point cloud first.
- Run the command `python generate_persistence_diagrams.py`
- Enter the collection directory name when prompted; it will be in the format LAS filename + "_" + partition count
- Select the filtration method you would like to use to generate the persistence diagram
- Specify whether or not you would like to use multiprocessing in an attempt to speed up the process
- NOTE: This is often much slower for smaller data sets due to the startup cost of multiprocessing in Python.
- The collection will then contain PCPDS objects with persistence diagrams, enabling all experiments to be run on the collection.
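To illustrate what a persistence diagram records, here is a minimal, self-contained sketch of 0-dimensional persistence for a Vietoris–Rips filtration, computed by single-linkage merging. This is a generic illustration of the concept, not the project's filtration code, which supports multiple filtration methods via the prompt above.

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence pairs (birth, death) of a Vietoris-Rips
    filtration: every point (component) is born at scale 0, and a component
    dies at the edge length that merges it into another (Kruskal's algorithm
    on the complete graph). One component survives forever (death = inf)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            pairs.append((0.0, w))  # a component born at 0 dies at scale w
    pairs.append((0.0, np.inf))  # the component containing everything
    return pairs
```

For three collinear points at x = 0, 1, 10, this yields deaths at 1, 9, and infinity: the two short-lived components merge at the minimum-spanning-tree edge lengths.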
- You can verify this with the `verify_filtration.py` script by entering the collection name to confirm the filtration type, and that it is consistent throughout the entire collection.
Experiment 1) Locate a point cloud within a larger point cloud, where the smaller cloud is a partition of that larger cloud.
Experiment 2) Point Cloud slider
- Locate a point cloud with noise applied in the form of Gaussian error
- Locate a rotated point cloud within a larger point cloud
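The two transformations tested in Experiment 2 can be sketched as follows. The function names and parameters are hypothetical; the experiment scripts apply their own noise and rotation internally.

```python
import numpy as np

def add_gaussian_noise(points, sigma=0.05, seed=None):
    """Perturb each coordinate with zero-mean Gaussian error of
    standard deviation sigma (the noisy-location experiment)."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)

def rotate_about_z(points, angle_rad):
    """Rotate an (N, 3) point cloud about the z axis
    (the rotated-location experiment)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```

Because persistence diagrams are invariant under rigid motions and stable under small perturbations, a partition transformed this way should still be locatable by comparing diagrams rather than raw coordinates.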