
Experiments


Preparation for running experiments

Processing LAS files into point cloud partitions

Explanation:

In order to run the experiments, you must first partition the LAS file you want to work with. This step splits the LAS file's point cloud data into many smaller point clouds and stores each one in its own Point Cloud Persistence Diagram Storage object, or PCPDS for short, under a common directory whose name is determined by the LAS file input and the partition count.

To Process a LAS file into a PCPDS object collection, do the following:

  • Run the command `python generate_pcpds_files.py`
  • Enter the LAS filename, assuming it is in the default project directory.
  • Enter the desired partition count per axis.
  • Specify whether or not you would like to use multiprocessing to speed up the partitioning.
    • NOTE: This is often much slower for smaller data sets due to the startup cost of multiprocessing in Python.
  • This will generate a folder in the collection_path directory by default, with the naming schema of 'LAS filename + partition count' (a minimal sketch of the partitioning step follows this list).
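For reference, the core of the partitioning step amounts to bucketing points into a regular grid. The sketch below is only an illustration, assuming an (N, 3) NumPy array of XYZ points and a hypothetical `partitions_per_axis` value, and a 2D grid over x and y; it is not the project's `generate_pcpds_files.py` implementation.

```python
import numpy as np

def partition_points(points, partitions_per_axis):
    """Bucket an (N, 3) array of XYZ points into a regular x/y grid.

    Returns a dict mapping a (ix, iy) cell index to the points that fall
    inside that cell. Illustrative only; the real script also wraps each
    partition in a PCPDS object and saves it under the collection directory.
    """
    mins = points[:, :2].min(axis=0)
    maxs = points[:, :2].max(axis=0)
    # Cell index along x and y for every point, clamped to the last cell.
    cell = ((points[:, :2] - mins) / (maxs - mins) * partitions_per_axis).astype(int)
    cell = np.clip(cell, 0, partitions_per_axis - 1)

    partitions = {}
    for idx, point in zip(map(tuple, cell), points):
        partitions.setdefault(idx, []).append(point)
    return {idx: np.array(pts) for idx, pts in partitions.items()}
```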

Generating persistence diagrams for a collection of point cloud partitions

Explanation:

The experiments below also require persistence diagrams to be generated for the pcpds objects saved in the referenced collection. The guide above only partitions the data and sets the point cloud portion of each pcpds object, so the objects have no persistence diagram initially. Generating the diagrams is a separate step so that a different filtration method can be run on the same collection without having to repartition the point cloud first. A minimal sketch of this step follows the list below.

In order to generate persistence diagrams for every pcpds object in a collection, do the following:

  • Run the command `python generate_persistence_diagrams.py`
  • Enter the collection directory name when prompted; it will be in the format LAS filename + "_" + partition count
  • Select the filtration method you would like to use to generate the persistence diagrams
  • Specify whether or not you would like to use multiprocessing to speed up the process
    • NOTE: This is often much slower for smaller data sets due to the startup cost of multiprocessing in Python.
  • As a result, the collection will now contain pcpds objects with persistence diagrams, enabling the use of all experiments on the collection.
    • You can verify this with the `verify_filtration.py` script: enter the collection name to confirm the filtration type and that it is consistent throughout the entire collection.
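As a point of reference, a persistence diagram for one partition's point cloud can be computed with a library such as ripser; the project's filtration options may differ, and this sketch is not the `generate_persistence_diagrams.py` implementation.

```python
import numpy as np
from ripser import ripser  # pip install ripser

def rips_persistence_diagrams(points, maxdim=1):
    """Compute Vietoris-Rips persistence diagrams for an (N, 3) point cloud.

    Returns a list of diagrams, one per homology dimension 0..maxdim.
    Illustrative only; the project's scripts attach the resulting diagram
    to the corresponding pcpds object instead of returning it.
    """
    return ripser(points, maxdim=maxdim)["dgms"]

# Example usage on a small random cloud:
cloud = np.random.rand(100, 3)
diagrams = rips_persistence_diagrams(cloud)
print([d.shape for d in diagrams])
```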

Experiment 1) Locating a local point cloud

Summary

Experiment 1 focuses on locating a partition within the larger point cloud it was taken from.
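One way to frame this search, not necessarily the exact method used by the experiment script, is to compare the query partition's persistence diagram against every diagram in the collection and take the closest match, for example using the bottleneck distance from persim. The helper and the dict layout below are hypothetical.

```python
from persim import bottleneck  # pip install persim

def locate_partition(query_diagram, collection_diagrams):
    """Return the key of the collection diagram closest to the query.

    `collection_diagrams` is assumed to be a dict mapping a partition name
    to its H1 persistence diagram (an (M, 2) array of birth/death pairs).
    Hypothetical helper for illustration only.
    """
    return min(collection_diagrams,
               key=lambda name: bottleneck(query_diagram, collection_diagrams[name]))
```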

How to run:

Experiment 2) Point Cloud slider

Summary

How to run:

Experiment 3) Locate point cloud with noise applied

Summary

  • Locate point cloud with noise applied in the form of Gaussian Error
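As an illustration of the noise model described above (Gaussian error added to each coordinate), here is a minimal sketch using NumPy; the sigma value is an assumption, not the experiment's setting.

```python
import numpy as np

def add_gaussian_noise(points, sigma=0.01):
    """Return a copy of an (N, 3) point cloud with zero-mean Gaussian noise
    of standard deviation `sigma` added to every coordinate.
    The default sigma is illustrative only."""
    return points + np.random.normal(scale=sigma, size=points.shape)
```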

How to run:

Experiment 4) Locate a rotated point cloud

Summary

  • Locate a rotated point cloud from a larger point cloud
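For reference, rotating a point cloud can be done with a standard rotation matrix; the sketch below rotates about the z-axis around the cloud's centroid, and that choice of axis and pivot is an assumption rather than the experiment's configuration.

```python
import numpy as np

def rotate_about_z(points, angle_radians):
    """Rotate an (N, 3) point cloud about the z-axis by `angle_radians`
    around the cloud's centroid. Illustrative only."""
    c, s = np.cos(angle_radians), np.sin(angle_radians)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    centroid = points.mean(axis=0)
    return (points - centroid) @ rotation.T + centroid
```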

How to run: