Development Guidelines and Standards
Development for 2.0 is being done loosely following the guidelines and standards below. The source tree for 2.0 development is in the `develop` branch. The tree has been restructured to modularize each major algorithm or function into its own sub-directory, with the idea that this will allow multiple developers to work in the tree and add to it with minimal interference with each other.
```
git checkout develop
ls

cmake/              - cmake files for pgRouting
CMakeLists.txt      - Top level cmake file for the project
doc/                - Top level doc, place to add doc not specific to src algorithms
  themes/           - Sphinx theme for the doc
  static/           - images needed in the documentation
src/                - this is the main src tree
  astar/            - A* search algorithm
  common/           - common files needed across the pgRouting project
  dijkstra/         - Dijkstra algorithm
  driving_distance/ - Driving Distance application
  shooting_star/    - Shooting Star **DEPRECATED** This will likely get removed.
  trsp/             - Turn restricted shortest path
  tsp/              - Traveling Salesman Problem solver
tools/              - Miscellaneous tools for mingw, the test runner, etc.
```
Within each of the `src` subdirectories you will find a `doc`, `sql`, `src`, and `test` subdirectory. This is our standard layout:
- `doc` - all documentation for this specific algorithm, in reStructuredText format. It should cover all the user-exposed functions, describe all the input parameters and output structures, and include any additional documentation needed for users to understand how the algorithm works and how to set it up for their own use.
- `sql` - the SQL wrappers for the C code and any type definitions that are not already in `src/common/sql/pgrouting-types.sql`. If you have type definitions that are used by multiple algorithms, they should be added to `src/common/sql/pgrouting-types.sql` so they only get defined once; otherwise installing the extension will generate an error.
- `src` - the C/C++ code for the algorithm.
- `test` - the `test.conf` file and the related `*.data`, `*.test`, and `*.rest` files to test the algorithm.
We have run into multiple issues where the C++ code crashes the server back-end. This is obviously very BAD. There are some simple things we can do to prevent this.
- All C++ that is called from C MUST have a try-catch block wrapped around the body of the function, like this:

```cpp
try {
    <body of function>
}
catch (...) {
    *err_msg = (char *) "Caught an unknown exception";
    return -1;
}
```
A slightly better variant of this would be:
```cpp
#include <exception>
...
try {
    <body of function>
}
catch (std::exception& e) {
    *err_msg = (char *) e.what();
    return -1;
}
catch (...) {
    *err_msg = (char *) "Caught an unknown exception";
    return -1;
}
```
In general, the C++ code needs to be reviewed, and appropriate finer-grained try blocks need to be added to check all `new()` requests, divide-by-zero errors, array index errors, and any other potential errors, with the catch block freeing memory as we unwind. We might want to establish an application exception that can be thrown, caught for additional cleanup, and rethrown as needed. Inside the PostgreSQL back-end it is important that we do not leak memory during error recovery.
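As a sketch of that idea, here is what such an application exception might look like. This is illustrative only, not existing pgRouting code: the `AppException` class and `do_work` function are hypothetical, and `strdup` (POSIX, not standard C++) is assumed to be available for copying the message so it outlives the exception:

```cpp
#include <cstring>      // strdup (POSIX) - an assumption, not standard C++
#include <exception>
#include <string>

// Hypothetical application exception that owns its message text.
class AppException : public std::exception {
    std::string msg;
public:
    explicit AppException(const std::string& m) : msg(m) {}
    virtual ~AppException() throw() {}
    virtual const char* what() const throw() { return msg.c_str(); }
};

// Hypothetical C-callable wrapper: frees what it allocated on every
// error path so the back-end does not leak memory during recovery.
static int do_work(char** err_msg) {
    double* tmp = NULL;
    try {
        tmp = new double[1024];           // may throw std::bad_alloc
        // ... run the algorithm; on bad input, for example:
        //     throw AppException("negative edge cost");
        delete[] tmp;
        return 0;
    }
    catch (std::exception& e) {
        delete[] tmp;                     // free memory as we unwind
        *err_msg = strdup(e.what());      // copy: e dies with this frame
        return -1;
    }
    catch (...) {
        delete[] tmp;
        *err_msg = (char *) "Caught an unknown exception";
        return -1;
    }
}
```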
While we do not have a nice clean mechanism for throwing and catching errors in the C language, it is important that we also check for errors and clean up as we return through the calling stack.
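In C the usual substitute is to check every return value and funnel all failure paths through a single cleanup label. A minimal sketch of the pattern (the function, variables, and messages are hypothetical):

```c
#include <stdlib.h>

/* Hypothetical helper: returns 0 on success, -1 on error, and frees
 * everything it allocated on every path out. */
int compute_path(int edge_count, char **err_msg) {
    int ret = -1;
    double *costs = NULL;
    int *path = NULL;

    costs = malloc(edge_count * sizeof(double));
    if (!costs) { *err_msg = "Out of memory"; goto cleanup; }

    path = malloc(edge_count * sizeof(int));
    if (!path) { *err_msg = "Out of memory"; goto cleanup; }

    /* ... do the work; on any failure, set *err_msg and goto cleanup ... */

    ret = 0;    /* success */

cleanup:
    free(path);     /* free(NULL) is a safe no-op */
    free(costs);
    return ret;
}
```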
For the documentation, I would like to see us move to a standard template that uses an Information Mapping style structure, like the following outline:
- What is it?
  - A short overview or description of the algorithm.
- Why do I care?
  - A brief description of the features and benefits of this algorithm.
  - Why I might want to use it over one of the other ones.
- How does it work?
  - An overview of how the algorithm works:
    - what the inputs are (high-level concepts here)
    - what can be expected as results.
  - This is a high-level discussion, not all the details.
- How do I make it work?
  - This is the detailed documentation and should discuss:
    - how to set up data for the inputs
    - what the queries and their variations are
    - what the expected results are
    - what error conditions to expect, and suggested fixes
  - A simple example might be appropriate.
- (Optional) Additional reference material and links
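As an illustration, a doc page following this template might start from a reStructuredText skeleton like this (the title is a placeholder):

```rst
My Algorithm
============

What is it?
-----------
A short overview or description of the algorithm.

Why do I care?
--------------
Features and benefits, and when to prefer it over the alternatives.

How does it work?
-----------------
High-level discussion of the inputs and the expected results.

How do I make it work?
----------------------
Detailed documentation: input data setup, the queries and their
variations, expected results, error conditions with suggested fixes,
and a simple example.

Additional reference material and links
----------------------------------------
```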
The current test infrastructure (and this could be improved upon if someone has a better idea) consists of a Perl script, `tools/test-runner.pl`, that is run from the top level of the project tree. It locates all the `test.conf` files in the various `src/<algorithm>/test/` directories, loads them, runs the described tests, and reports the results. The report still needs some formatting.
In a given algorithm's `test` directory there needs to be:

- `test.conf` - a Perl data structure defining what tests to run
- `test00.data` - all `*.data` files are loaded at once, so make sure there are no table name collisions
- `test01.test` - just a SQL query; the results will get compared to `test01.rest`
- `test01.rest` - expected results of `test01.test`
- `test02.test` - just a SQL query; the results will get compared to `test02.rest`
- `test02.rest` - expected results of `test02.test`
- etc.
For your `*.data` files, it would be wise to `DROP TABLE IF EXISTS mytesttable;` before you create the table, in case one is left over from a previous test case.
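For example, a minimal `*.data` file might look like this (the table name and columns are illustrative, not a real pgRouting fixture):

```sql
-- Guard against leftovers from a previous test run.
DROP TABLE IF EXISTS mytesttable;

CREATE TABLE mytesttable (
    id integer,
    source integer,
    target integer,
    cost double precision
);

INSERT INTO mytesttable (id, source, target, cost) VALUES
    (1, 1, 2, 1.0),
    (2, 2, 3, 2.5);
```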
To create an appropriate `*.rest` file, you should run your `*.test` file like this:

```
psql -U postgres -h localhost -A -t -q -f mytest.test testdb > mytest.rest
```
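For illustration, if a hypothetical `mytest.test` contained:

```sql
SELECT 34 AS id, 'foo' AS name;
```

then with the `-A` (unaligned) and `-t` (tuples only) flags, the resulting `mytest.rest` would hold the single line `34|foo`, which is the exact text the test runner compares against.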
The `test.conf` file for `trsp` looks like this:
```perl
#!/usr/bin/perl -w
%main::tests = (
    'any' => {
        'comment' => 'TRSP test for any versions.',
        'data' => ['trsp-any-00.data'],
        'tests' => [qw(trsp-any-00 trsp-any-01 trsp-any-02 trsp-any-03)]
    },
    # 'vpg-vpgis' => {}, # for version specific tests
    # '8-1' => {},       # for pg 8.x and postgis 1.x
    # '9.2-2.1' => {},   # for pg 9.2 and postgis 2.1
);
1;
```
The `'any'` key says the test is good for any version. My idea is that you could also specify tests that are specific to given versions, where `vpg` is the PostgreSQL version truncated to major and minor versions and `vpgis` is the PostGIS version. (I'm not sure how much of that I actually implemented, and most cases should just use `'any'`.)
For the data, you can use a plain-format `pg_dump` SQL file of the needed tables, or you can create the tables and do inserts or table copies.
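For example, a single-table plain-format dump could be produced with something like this (connection details and the table name are illustrative):

```
pg_dump -U postgres -h localhost -t mytesttable testdb > mytest00.data
```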
pgRouting aims to follow this Git branching model: http://nvie.com/posts/a-successful-git-branching-model
The main branches:
- master
- develop
- pgr-1.x
Supporting branches with a limited lifetime:
- Feature branches
- Release branches
- Hotfix branches