Wednesday, October 31, 2018

SRS barn raising: 5th report

This is the fifth progress report of the GDAL SRS barn effort. Once again, a lot of activity went into PROJ development this month.

New conversion and transformation methods referenced in the EPSG database have been integrated:
  • Projection methods: Lambert Conic Conformal (2SP Michigan) and Laborde Oblique Mercator
  • "Change of Vertical Unit", "Vertical offset" and other fixes relative to vertical component handling
  • "Axis order reversal"
  • "Affine parametric transformation", and its implementation in PROJ computation core
  • "Molodensky-Badekas" transformation, and its implementation in PROJ computation core
Regarding database-related activities:
  • Make concatenated-operation building from proj.db robust to inverse sub-operations (the EPSG database lists chained operations, but does not indicate whether the forward or reverse path must be taken)
  • Reference the transformation grids available in the proj-datumgrid-* packages, such as the NAD83 -> NAD83(HPGN) grids or OSTN15_NTv2_OSGBtoETRS.gsb
  • Add a celestial_body table and celestial_body property to ellipsoid, as a provision to handle non-Earth bodies
  • Add a text_definition column to geodetic_crs and projected_crs to allow definition by PROJ string (or WKT)
  • Add the possibility of defining in 'other_transformation' a transformation expressed as a PROJ pipeline
  • The createOperations() method that returns transformations between two CRS is now able to find pivot CRS when no direct transformation path exists. One notable fact to underline is that the pivot is not necessarily WGS84: several candidate pivots are explored. When transforming from CRS A to CRS B, the database is searched for all CRS C for which there is a registered transformation from/to both CRS A and CRS B (advanced users may also restrict the candidate pivot(s) to use). A minimal usage sketch follows this list.
  • Import the PROJ.4 definitions currently contained in data/IGNF into the database in a relational form. A script to directly use the official registry as the source, instead of the PROJ.4 strings that derive from it, has also been developed but is not used yet.
  • Import of the CRS definitions and transformations between horizontal CRS from the CSV files of the published Esri Projection Engine Database Documentation. The projected CRS are imported with their ESRI WKT definition directly put in the database, and work is in progress to ingest this WKT1 variant into the WKT2-based form used internally by PROJ.
  • Update the CRS and transformation definitions to EPSG dataset v9.5.4
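As an illustration of the new createOperations() capability, here is a minimal sketch using the C++ API of the development branch (the exact class names, namespaces and the NAD27/ETRS89 pairing are my assumptions and may not match the final API exactly):

```cpp
#include <iostream>

#include "proj/coordinateoperation.hpp"
#include "proj/crs.hpp"
#include "proj/io.hpp"

using namespace osgeo::proj;

int main()
{
    auto dbContext = io::DatabaseContext::create();
    auto authFactory = io::AuthorityFactory::create(dbContext, "EPSG");
    auto sourceCRS = authFactory->createCoordinateReferenceSystem("4267"); // NAD27
    auto targetCRS = authFactory->createCoordinateReferenceSystem("4258"); // ETRS89
    // Operations are looked up in proj.db; if no direct path exists,
    // candidate pivot CRS (not necessarily WGS84) are explored
    auto ctxt = operation::CoordinateOperationContext::create(
        authFactory, nullptr /* area of interest */, 0.0 /* accuracy */);
    auto ops = operation::CoordinateOperationFactory::create()->createOperations(
        sourceCRS, targetCRS, ctxt);
    std::cout << ops.size() << " candidate operation(s) found" << std::endl;
    return 0;
}
```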
The addition of the DerivedEngineeringCRS, DerivedParametricCRS and DerivedTemporalCRS classes marks the completion of the implementation of the full ISO-19111:2018 CRS modelling.

A rather tedious task consisted in optimizing the code to make the generated binary lighter, mostly by avoiding a lot of unnecessary instantiation of temporary C++ objects, which involved frequently looking at disassembled output to locate them. This enabled an optimized build to shrink from 3.1 MB to 2.6 MB.
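To give an idea of the kind of change involved, here is a hypothetical before/after sketch (not actual PROJ code; the function names are invented for illustration):

```cpp
#include <string>

// Hypothetical illustration, not actual PROJ code.

// Before: each call site emits inline code to construct and destroy two
// temporary std::string objects.
static void setParamBefore(const std::string &key, const std::string &value)
{
    (void)key;
    (void)value;
}

// After: a const char* signature avoids emitting that temporary-object
// machinery at each of the hundreds of call sites, making the binary lighter.
static void setParamAfter(const char *key, const char *value)
{
    (void)key;
    (void)value;
}

int main()
{
    setParamBefore("proj", "tmerc"); // two temporaries created here
    setParamAfter("proj", "tmerc");  // no temporaries
    return 0;
}
```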

The Doxygen-generated documentation of the C++ API has been integrated with the general PROJ documentation in ReST format, with the Breathe module.

An initial C API that makes part of the C++ functionality available to C users has also been added.
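A minimal sketch of what using that C API can look like (function and enumeration names are those visible in the development branch and may still evolve):

```cpp
#include <stdio.h>

#include "proj.h"

int main(void)
{
    PJ_CONTEXT *ctx = proj_context_create();
    /* Instantiate EPSG:4326 from its definition in proj.db */
    PJ *crs = proj_create_from_database(ctx, "EPSG", "4326",
                                        PJ_CATEGORY_CRS, 0, NULL);
    if (crs) {
        /* Export the object as WKT2:2018 */
        printf("%s\n", proj_as_wkt(ctx, crs, PJ_WKT2_2018, NULL));
        proj_destroy(crs);
    }
    proj_context_destroy(ctx);
    return 0;
}
```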

Finally, an RFC has been written to officially submit this work for PROJ PSC approval (doubling the size of a code base and changing its language requirements is not a mundane detail). This RFC has now been officially adopted, which will make it possible to ultimately merge this work into PROJ master. You may skim through it for a few examples of the projinfo utility that demonstrate part of the new capabilities developed.

Friday, September 28, 2018

SRS barn raising: 4th report

This is the fourth progress report of the GDAL SRS barn effort.

The in-progress task at the time of the last report was the addition of a createFromPROJString() method that takes a PROJ string and builds the corresponding ISO-19111 objects. It has now been completed for ProjectedCRS. Not all arbitrary valid PROJ strings can currently be mapped to ISO-19111 objects, but at least strings expressing CRS definitions can. A few constructs involving Helmert transforms are also parsed as CoordinateOperation.
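A sketch of this newly completed ProjectedCRS path, with the class spellings of the development branch (subject to change):

```cpp
#include <iostream>

#include "proj/crs.hpp"
#include "proj/io.hpp"

using namespace osgeo::proj;

int main()
{
    // Build ISO-19111 objects from a PROJ string describing a projected CRS
    auto obj = io::PROJStringParser().createFromPROJString(
        "+proj=utm +zone=31 +datum=WGS84");
    auto crs = util::nn_dynamic_pointer_cast<crs::ProjectedCRS>(obj);
    if (crs) {
        // The resulting object can then be exported, e.g. as WKT 2
        std::cout << crs->exportToWKT(io::WKTFormatter::create().get())
                  << std::endl;
    }
    return 0;
}
```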

A few random tasks completed:
  • Map time-dependent Helmert transforms, and fixes for existing Helmert transforms
  • Map Molodensky and Abridged Molodensky transformation methods from/to PROJ strings
  • Map Longitude rotation transformation method
  • Update to recognize VRF and TRF WKT2-2018 keywords
  • Add import/export of VERTICALEXTENT and TIMEEXTENT WKT2 elements
  • Add import/export of WKT2:2018 USAGE node
  • Progress in implementation of "esoteric" ISO-19111 classes: DatumEnsemble, DynamicVerticalReferenceFrame, OrdinalCS, DerivedProjectedCRS, EngineeringCRS, ParametricCRS, DerivedVerticalCRS
  • Several fixes

The most interesting task regarding the high-level objectives of the roadmap of the barn campaign is the creation of a proj.db SQLite3 database containing the definitions of CRS (and related objects) and coordinate operations (datum shifts, ...). The workflow to build it is:
  1. Import EPSG dataset PostgreSQL .sql dumps
  2. Run scripts/build_db.py, which ingests those dumps into a temporary SQLite3 database, extracts the needed information from it, and marshals it into a more digestible form for our final proj.db. At the end, the script outputs new .sql scripts in the data/sql subdirectory of the PROJ directory. We keep those text files in version control, for better traceability of changes.
  3. At make time, we build the proj.db database by importing those .sql scripts.
Steps 1 and 2 are done (typically by PROJ developers) each time an update to a newer version of the EPSG dataset is needed.
Step 3 is done automatically at PROJ build time, from a git clone or a tarball of an official release.

The proj.db structure allows for multiple authorities. Instead of having a single code column to identify and reference objects, we use a tuple (authority_name, code) as the key, both columns being of text type for better generality. So ('EPSG', '4326') or ('IGNF', 'LAMB1') are possible. At this time, only EPSG-derived objects are in the database; importing other dictionaries is a task for later.

Having a database is good, but using it is better. So the next task was to implement a factory class able to build an object of our ISO-19111 modelling from its (auth_name, code) tuple. This is now completed for all object categories of the database (see the sketch below). The in-progress task is the generalization of the current createOperation(sourceCRS, targetCRS) method, which creates a coordinate operation without any external input other than the definition of the two CRS, into a createOperations(sourceCRS, targetCRS, context) method that also uses the proj.db database to find registered coordinate operations (typically between geographic CRS), and takes into account a specified area of interest and desired accuracy, so as to propose a list of candidate coordinate operations (chaining, for example, the inverse projection, a datum shift and a forward projection). I'm also working on a new `projinfo` utility, similar in purpose to `gdalsrsinfo`, offering the possibility to ingest PROJ strings (legacy PROJ.4 format and new PROJ pipelines), WKT 1, WKT 2 and authority codes, and to output PROJ strings (legacy and new), WKT 1, WKT2:2015 and WKT2:2018. A mode to list the coordinate operations possible between two CRS will also be available.
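A sketch of the factory at work (names from the development branch, subject to change):

```cpp
#include <iostream>

#include "proj/crs.hpp"
#include "proj/io.hpp"

using namespace osgeo::proj;

int main()
{
    auto dbContext = io::DatabaseContext::create();
    // Objects are keyed by the (authority_name, code) tuple
    auto factory = io::AuthorityFactory::create(dbContext, "EPSG");
    auto crs = factory->createCoordinateReferenceSystem("4326");
    std::cout << crs->nameStr() << std::endl; // "WGS 84"
    return 0;
}
```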

One interesting statistic: the number of lines of C++ code (including blank lines and comments) added to PROJ by this work is now greater than the amount of historical C code: 47,000 lines (14,000 of them being tests) vs 43,000.

Wednesday, August 29, 2018

SRS barn raising: 3rd report

This is the third progress report of the GDAL SRS barn effort.

In the last month, the main task was to continue, and finish, the mapping of all projection methods currently supported by GDAL between their different representations as WKT 2, WKT 1 and PROJ strings: LCC_2SP, LCC_2SP_Belgium, Modified Azimuthal Equidistant, Guam Projection, Bonne, (Lambert) Cylindrical Equal Area, GaussSchreiberTransverseMercator, CassiniSoldner, EckertI to VI, EquidistantCylindricalSpherical, Gall, GoodeHomolosine, InterruptedGoodeHomolosine, GeostationarySatelliteSweepX/Y, International Map of the World Polyconic, Krovak North Oriented and Krovak, LAEA, Mercator_1SP and Mercator_2SP, WebMercator (including GDAL WKT1 import/export tricks), Mollweide, ObliqueStereographic and Orthographic, American Polyconic, PolarStereographicVariantA and PolarStereographicVariantB, Robinson and Sinusoidal, Stereographic, VanDerGrinten, Wagner I to VII, QuadrilateralizedSphericalCube, SphericalCrossTrackHeight, Aitoff, Winkel I and II, Winkel Tripel, Craster_Parabolic, Loximuthal and Quartic_Authalic.

The task was tedious, but necessary. For some cases, this involved cross-checking formulas in the EPSG "Guidance Note 7, part 2 Coordinate Conversions & Transformations including Formulas", the PROJ implementation and Snyder's "Map Projections - A Working Manual", because of ambiguities in some projection names. Typically, the ObliqueStereographic in EPSG is not the Oblique Stereographic of Snyder: the former is implemented as the Oblique Stereographic Alternative (sterea) in PROJ, and the latter as the Oblique Stereographic (stere). The parameter names in WKT 2 / EPSG also tend to be much more specific than in GDAL WKT 1. Where GDAL WKT1 mostly has a "latitude_of_origin" parameter mapping to the lat_0 PROJ parameter, WKT2 parameter names better reflect the mathematical characteristics of the projection, distinguishing between "Latitude of natural origin", "Latitude of projection centre" and "Latitude of false origin".

The currently ongoing task is to implement a method that takes a PROJ string and builds the corresponding ISO-19111 objects. This is done for GeographicCRS (+proj=longlat), and in progress for ProjectedCRS. When this is completed, we will have the infrastructure to convert in all directions between PROJ strings, WKT 1 and WKT 2.
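A sketch of the already working GeographicCRS path (class names as they appear in the pull request, subject to change):

```cpp
#include <iostream>

#include "proj/crs.hpp"
#include "proj/io.hpp"

using namespace osgeo::proj;

int main()
{
    // Parse a PROJ string into ISO-19111 objects...
    auto obj = io::PROJStringParser().createFromPROJString(
        "+proj=longlat +datum=WGS84");
    auto crs = util::nn_dynamic_pointer_cast<crs::GeographicCRS>(obj);
    if (crs) {
        // ... and dump them back, here as WKT 2
        std::cout << crs->exportToWKT(io::WKTFormatter::create().get())
                  << std::endl;
    }
    return 0;
}
```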

While digging into PROJ code, I also uncovered a number of issues in the Helmert implementation (confusion for rotational parameters between the "Position Vector" and "Coordinate Frame" conventions), in the handling of the not-so-well-known +geoc flag for geocentric latitudes, and in the handling of vertical units for geographic CRS with the new PROJ API. All of those fixes have been independently merged into PROJ master, so as to be available for the upcoming PROJ 5.2.0, which should be released around mid-September (to remove any confusion, that release will not yet include all the WKT 2 related work).

Thursday, July 26, 2018

SRS barn raising: 2nd report

This is the second progress report of the GDAL SRS barn effort.

Since the previous report, a number of topics have been addressed:
- extension of the class hierarchy to implement BoundCRS (the generalization of the WKT1 TOWGS84 concept; it only exists in WKT 2 and has not been modelled in ISO-19111, so I went with my own modelling), TimeCRS and DerivedGeodeticCRS
- implementation of the exportToPROJ() method for CRS and related objects, and for CoordinateOperation (see the sketch after this list)
- addition of all documentation needed at class and method level so that Doxygen passes without warnings
- implementation of the CoordinateOperation::createOperation() method that can instantiate a transformation between two CRS. For now, it does not yet use the operations of the EPSG database, but it can already take into account the hints of a BoundCRS.
- implementation of a number of transformations: geocentric translation, position vector transformation, coordinate frame rotation, NTv2, GravityRelatedHeightToGeographic3D, VERTCON
- start of the mapping of all projection methods currently supported by GDAL. For now: UTM, Transverse Mercator, Transverse Mercator South Oriented, Two Point Equidistant, Tunisia Mapping Grid, Albers Conic Equal Area, Lambert Conic Conformal 1SP, New Zealand Map Grid. Several tens are obviously still missing.
- addition of an isEquivalentTo() method to compare the various objects
- and of course, extensive unit testing of all the above work.
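As announced in the list above, here is a sketch of the PROJ export path (assuming the exportToPROJString() spelling used in the pull request; names may still change):

```cpp
#include <iostream>

#include "proj/crs.hpp"
#include "proj/io.hpp"

using namespace osgeo::proj;

int main()
{
    // Build a CRS from a (GDAL-style) WKT1 string...
    auto obj = io::WKTParser().createFromWKT(
        "GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\","
        "SPHEROID[\"WGS 84\",6378137,298.257223563]],"
        "PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433]]");
    auto crs = util::nn_dynamic_pointer_cast<crs::CRS>(obj);
    if (crs) {
        // ... and export it as a PROJ string
        std::cout << crs->exportToPROJString(
                         io::PROJStringFormatter::create().get())
                  << std::endl;
    }
    return 0;
}
```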

The result of this continued work can be followed in this pull request.

As a related effort, I've coordinated with the OGC CRS Working Group to provide my comments on the upcoming ISO 19162 / WKT2:2018 revision.

Monday, June 18, 2018

The barn is raising

Thanks to the support given by the sponsors of the GDAL SRS barn effort, I have been able to kick off the first works in the past weeks. The work up to now has concentrated on the PROJ front.

The first step was to set up a foundation of C++ classes that implement the ISO-19111 / OGC Topic 2 "Referencing by coordinates" standard. Actually, I have anticipated the future adoption of the 18-005r1 2018 revision of the standard, which takes into account the latest advances in the modelling of coordinate reference systems (in particular dynamic reference frames, geoid-based vertical coordinate reference systems, etc.), and which will be reflected in the corresponding update of the WKT2:2018 standard and future updates of the EPSG dataset. If you are curious, you can skim through the resulting PROJ class hierarchy, which is really close to the abstract specification (a number of those classes currently lack a real implementation). With the agreement of the newly born PROJ project steering committee, I have opted for C++11, which offers a number of useful modern features to reduce boilerplate and concentrate on the interesting aspects of the work.

On the functional front, there is already support to read WKT1 (its GDAL variant for now) and WKT2 strings and build a subset of the aforementioned C++ objects, and conversely to dump those C++ objects as WKT1 and WKT2 strings. In particular, you can import from WKT1 and export to WKT2, or the reverse (within the limitations of each format). So this abstract modelling (quite close to WKT2, of course) effectively serves its purpose of being independent from the actual representation of the CRS. As I mentioned an early adoption of the OGC Topic 2 standard, I've similarly taken into account the future WKT2:2018 (OGC 18-010) standard, which aligns well with the abstract specification. In the API, the user can select whether to export according to the currently adopted WKT2:2015 version (OGC 12-063r5) or to the future WKT2:2018 revision.
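A sketch of the WKT1 to WKT2:2018 round-trip described above (names from the pull request, subject to change):

```cpp
#include <iostream>

#include "proj/crs.hpp"
#include "proj/io.hpp"

using namespace osgeo::proj;

int main()
{
    // Import from (GDAL-style) WKT1...
    auto obj = io::WKTParser().createFromWKT(
        "GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\","
        "SPHEROID[\"WGS 84\",6378137,298.257223563]],"
        "PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433]]");
    auto crs = util::nn_dynamic_pointer_cast<crs::CRS>(obj);
    if (crs) {
        // ... and export to the WKT2:2018 revision
        auto formatter = io::WKTFormatter::create(
            io::WKTFormatter::Convention::WKT2_2018);
        std::cout << crs->exportToWKT(formatter.get()) << std::endl;
    }
    return 0;
}
```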

The result of those first steps can be followed in this pull request.

Another task that has been accomplished is the addition of the Google Test C++ testing framework to PROJ (thanks to Mateusz Loskot for his help with the CMake integration), so all those new features can be correctly tested locally and on all platforms supported by PROJ continuous integration setup.

There are many future steps to do just on the PROJ front:
  • implement remaining classes
  • code documentation
  • comprehensive support of projection methods (at least the set currently supported by GDAL)
  • import from and export to PROJ strings for CRS definitions and coordinate operations
  • use of the EPSG dataset

Friday, June 15, 2018

SQLite and POSIX advisory locks

These past couple of days, I was working on implementing multi-layer transaction support for GeoPackage datasources (for QGIS 3.4). Multi-layer transactions are an advanced functionality of QGIS (you have to enable it in the project settings), initially implemented for PostgreSQL connections, where several layers can be edited together so as to have atomic modifications when editing them. Modifications are automatically sent to the database, using SQL savepoints to implement undo/redo operations, instead of being queued in memory and committed at once when the user stops editing the layer.
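The mechanism can be summarized with plain sqlite3 calls (an illustrative sketch, not QGIS code; the my_layer table is hypothetical):

```cpp
#include <sqlite3.h>

int main()
{
    sqlite3 *db = nullptr;
    sqlite3_open("test.gpkg", &db);

    // One enclosing transaction for all layers being edited together
    sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);

    // Each edit is sent immediately, guarded by a named savepoint
    sqlite3_exec(db, "SAVEPOINT edit_1", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "UPDATE my_layer SET name = 'foo' WHERE fid = 1",
                 nullptr, nullptr, nullptr);

    // Undo = roll back to the savepoint, while the enclosing transaction
    // stays open for further edits
    sqlite3_exec(db, "ROLLBACK TO SAVEPOINT edit_1", nullptr, nullptr, nullptr);

    // Atomic commit of everything when the user stops editing
    sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);
    sqlite3_close(db);
    return 0;
}
```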

While debugging my work during development, I stumbled upon a heisenbug. From time to time, the two auxiliary files attached to a SQLite database opened in Write Ahead Logging (WAL) mode, suffixed -wal and -shm, would suddenly disappear while the file was still open in QGIS. As those files are absolutely required, the consequence was that subsequent operations on the database failed: new readers (in the QGIS process) would be denied opening the file, and QGIS could not commit any new change to it. When the file was closed, the database returned to a proper state (which shows the robustness of sqlite). After some time, I realized that my issue arose exactly when I observed the database being edited by QGIS with an external ogrinfo on it (another way to reproduce the issue is to open a second QGIS instance on the same file and close it). I had indeed used ogrinfo to check that the state of the database was consistent during the editing operations. Okay, so instead of a random bug, I now had a perfectly reproducible bug. Half of the way to solving it, right?

How come ogrinfo, which involves read-only operations, could cause those -wal and -shm files to disappear? I had some recollection of code I had written in the OGR SQLite driver regarding this. When a dataset opened in read-only mode is closed by OGR, it checks if a -wal file still exists (which could happen if a database had not been cleanly closed, as with a killed process), and if so, it re-opens the database temporarily in update mode, does a dummy operation on it, and closes it. If the ogrinfo process is the only one that had a connection to the database, libsqlite removes the -wal and -shm files automatically (OGR does not directly remove the files; it relies on libsqlite's wisdom to determine whether they can be removed or not). But wait: in my above situation, ogrinfo was not the exclusive process operating on the database, since QGIS was still editing it... Would that be a bug in the venerable libsqlite??? (spoiler: no)

I tried to reproduce the situation by replacing QGIS with a plain sqlite console opening the file, and running ogrinfo on it. No accidental removal of the -wal and -shm files. Okay, so what is the difference between QGIS and the sqlite console (besides QGIS having like one million extra lines of code ;-))? Well, QGIS doesn't directly use libsqlite3 to open GeoPackage databases: it uses the OGR GPKG driver. So instead of opening with QGIS or a sqlite3 console, what if I opened with the OGR GPKG driver? Bingo: in that situation, I could also reproduce the issue. So something in OGR was the culprit. I will spare you the other details, but in the end it turned out that if OGR itself opened a .gpkg file with the standard file API while libsqlite3 had it open, chaos would result. This situation can easily happen: for example, when opening a dataset, OGR has to open the underlying file to at least read its header and figure out which driver should handle it. So the sequence of operations is normally:
1) the GDALOpenInfo class opens the file
2) the OGR GeoPackage driver realizes this file is for it, and uses the sqlite3_open() API to open it
3) the GDALOpenInfo class closes the file it has opened in step 1 (libsqlite3 still manages its own file handle)

When the above sequence was modified so that 3) executed before 2), the bug would not appear. At that point, I had some recollection that sqlite3 uses POSIX advisory locks to handle concurrent accesses, and that there were some issues with that POSIX API. Digging into the sqlite3.c source code revealed a very interesting 86-line-long comment about how POSIX advisory locks are broken by design. The main brokenness is of course that they are advisory and not mandatory, but as this is indicated in the name, one cannot really complain about that being a hidden feature. The most interesting finding was: "If you close a file descriptor that points to a file that has locks, all locks on that file that are owned by the current process are released." Bingo: that was just what OGR was doing.
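The problematic sequence can be reduced to a few lines (a demonstration sketch of the pitfall, not GDAL code):

```cpp
#include <fcntl.h>
#include <unistd.h>

#include <sqlite3.h>

int main()
{
    // 1) Header-sniffing code opens the database file directly
    int fd = open("test.gpkg", O_RDONLY);

    // 2) libsqlite3 opens the same file through its own descriptor
    //    and places its POSIX advisory locks on it
    sqlite3 *db = nullptr;
    sqlite3_open("test.gpkg", &db);

    // 3) Closing the sniffing descriptor releases ALL locks the process
    //    holds on the file, including those libsqlite3 thinks it still owns
    close(fd);

    // From here on, other processes may acquire conflicting locks: chaos
    sqlite3_close(db);
    return 0;
}
```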
My above workaround (making sure the file is closed before sqlite opens it and sets its locks) was OK for a single opening of a file in a process. But what if the user wants to open a second connection to the same file (which arises easily in the QGIS context)? The rather ugly solution I came up with was that the OGR GPKG driver warns the GDALOpenInfo class not to try to open a given file while it is still opened by the driver, and passes it the file header it would be supposed to find if it could open the file, so that the driver identification logic can still work. Those fixes are queued for GDAL 2.3.1, whose release candidate is planned for next Friday.

Conclusions:
  • never ever open (or, actually, close) a SQLite database with the regular file API while libsqlite3 is operating on it (in the same process)
  • POSIX advisory locks are awful.

Thursday, October 12, 2017

Optimizing JPEG2000 decoding

Over this summer I have spent 40 days (*) in the guts of the OpenJPEG open-source library (BSD 2-clause licensed) optimizing the decoding speed and memory consumption. The result of this work is now available in the OpenJPEG 2.3.0 release.

For those who are not familiar with JPEG-2000 (and they have a good excuse, given its complexity), this is a standard for image compression that supports lossless and lossy methods. It uses a discrete wavelet transform for multi-resolution analysis, and a context-driven binary arithmetic coder for the encoding of bit plane coefficients. When you go into the depths of the format, what is striking is the number of independent variables that can be tuned:

- use of tiling or not, and tile dimensions
- number of resolutions
- number of quality layers
- code-block dimensions
- 6 independent options regarding how code-blocks are encoded (code-block styles): use of selective arithmetic coding bypass, reset of context probabilities on coding pass boundaries, termination on each coding pass, vertically stripe-causal context, predictable termination, and segmentation symbols. Some can bring decoding speed advantages (notably selective arithmetic coding bypass), at the price of less compression efficiency. Others might help hardware-based implementations. Others can help detecting corruption in the codestream (predictable termination)
- spatial partition of code-blocks into so-called precincts, whose dimension may vary per resolution
- progression order, i.e. the criterion that decides how packets are ordered, which is a permutation of the 4 variables Precinct, Component, Resolution and Layer. The standard allows for 5 different permutations. To add extra fun, the progression order may be configured to change several times among the 5 possible ones within a codestream (something I haven't yet had the opportunity to really understand)
- division of packets into tile-parts
- use of multi-component transform or not
- choice of lossless or lossy wavelet transforms
- use of start of packet / end of packet markers
- use of Region Of Interest, to have higher quality in some areas
- choice of image origin and tiling origins with respect to a reference grid (the image and tile origin are not necessarily pixel (0,0))

And if that was not enough, some/most of those parameters may vary per tile! If you already found that TIFF/GeoTIFF had too many parameters to tune (tiling or not, pixel or band interleaving, compression method), JPEG-2000 is probably one or two orders of magnitude more complex. JPEG-2000 is truly a technological and mathematical jewel. But needless to say, having a compliant JPEG-2000 encoder/decoder, which OpenJPEG is (it is an official reference implementation of the standard), is already something complex. Having it perform optimally is yet another challenge.

Prior to this latest optimization round, I had already worked on enabling multi-threaded decoding at the code-block level, since code-blocks can be decoded independently (once you've re-assembled from the codestream the bytes that encode a code-block), and in the inverse wavelet transform as well (during the horizontal pass, rows can be transformed independently; during the vertical pass, columns). But single-threaded use had yet to be improved. Roughly, around 80 to 90% of the time during JPEG-2000 decoding is spent in the context-driven binary arithmetic decoder, around 10% in the inverse wavelet transform, and the rest in other operations such as the multi-component transform. I managed to get around a 10% improvement in the global decompression time by porting to the decoder an optimization that had been proposed by Carl Hetherington for the encoding side, in the code that determines which bit of a wavelet-transformed coefficient must be encoded during which coding pass. The trick here was to reduce the memory needed for the context flags, so as to decrease the pressure on the CPU cache. Other optimizations in that area have consisted in making sure that some critical variables are kept preferably in CPU registers rather than in memory. I've spent a good deal of time looking at the disassembly of the compiled code.
I've also optimized the reversible (lossless) inverse wavelet transform to use the Intel SSE2 (or AVX2) instruction sets to process several rows at once, which can result in up to a 3x speed-up for that stage (so a global improvement of around 3%).
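The idea can be illustrated on one lifting step of the reversible 5/3 inverse transform (an illustrative sketch, not the actual OpenJPEG code; it assumes buffers where the same coefficient position of 4 rows is stored contiguously):

```cpp
#include <emmintrin.h> // SSE2
#include <cstdint>
#include <cstddef>

// 5/3 inverse lifting update, s[i] -= (d[i-1] + d[i] + 2) >> 2, applied to
// 4 rows at once. even[i * 4 + r] holds low-pass coefficient i of row r and
// odd[i * 4 + r] the matching high-pass coefficient, so one 128-bit load
// fetches the same coefficient position across the 4 rows.
void inverse53_update_4rows(int32_t* even, const int32_t* odd, size_t n)
{
    const __m128i two = _mm_set1_epi32(2);
    for (size_t i = 1; i < n; ++i) {
        __m128i dprev = _mm_loadu_si128((const __m128i*)&odd[(i - 1) * 4]);
        __m128i dcur  = _mm_loadu_si128((const __m128i*)&odd[i * 4]);
        __m128i s     = _mm_loadu_si128((const __m128i*)&even[i * 4]);
        __m128i sum   = _mm_add_epi32(_mm_add_epi32(dprev, dcur), two);
        // Arithmetic shift right implements the floor division of the spec
        s = _mm_sub_epi32(s, _mm_srai_epi32(sum, 2));
        _mm_storeu_si128((__m128i*)&even[i * 4], s);
    }
}
```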

I've also worked on reducing the memory consumption needed to decode images, by removing the use of intermediate buffers when possible. The result is that the amount of memory needed to do full-image decoding was reduced by a factor of 2.4.

Another major work direction was to optimize speed and memory consumption for sub-window decoding. Up to now, the minimal unit of decompression was a tile. That is OK for tiles of reasonable dimensions (let's say 1024x1024 pixels), but definitely not for images that don't use tiling and that hardly fit into memory. In particular, OpenJPEG couldn't open images of more than 4 billion pixels. The work consisted of 3 steps:
- identifying which precincts and code-blocks are needed for the reconstruction of a spatial region
- optimizing the inverse wavelet transform to operate only on the rows and columns needed
- reducing the allocation of buffers to the amount strictly needed for the sub-window of interest
The overall result is that decoding time and memory consumption are now roughly proportional to the size of the sub-window to decode, whereas they were previously constant. For example, decoding 256x256 pixels in a 13498x9944x3-band image now takes only 190 ms, versus about 40 seconds before.
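From the API side, sub-window decoding goes through opj_set_decode_area(). A minimal usage sketch with the OpenJPEG 2.3 API (error handling omitted for brevity):

```cpp
#include <openjpeg.h>

int main()
{
    opj_stream_t *stream =
        opj_stream_create_default_file_stream("big.jp2", OPJ_TRUE);
    opj_codec_t *codec = opj_create_decompress(OPJ_CODEC_JP2);
    opj_dparameters_t params;
    opj_set_default_decoder_parameters(&params);
    opj_setup_decoder(codec, &params);
    opj_image_t *image = nullptr;
    opj_read_header(stream, codec, &image);
    // Restrict decoding to a 256x256 window: time and memory are now
    // roughly proportional to this window, not to the whole image
    opj_set_decode_area(codec, image, 0, 0, 256, 256);
    opj_decode(codec, stream, image);
    opj_end_decompress(codec, stream);
    opj_image_destroy(image);
    opj_destroy_codec(codec);
    opj_stream_destroy(stream);
    return 0;
}
```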

As a side activity, I've also fixed 2 different annoying bugs that could cause lossless encoding to not be lossless for some combinations of tile sizes and numbers of resolutions, or when some code-block style options were used.

I've just updated the GDAL OpenJPEG driver (in GDAL trunk) to be more efficient when dealing with untiled JPEG-2000 images.

There are many more things that could be done in the OpenJPEG library:
- port a number of optimizations to the encoding side: multi-threading, discrete wavelet transform optimizations, etc.
- on the decoding side, further reduce memory consumption, particularly in the untiled case. Currently we need to ingest into memory the whole codestream for a tile (so the whole compressed file, for an untiled image)
- linked to the above, use of TLM and PLT marker segments (kind of indexes to tiles and packets)
- on the decoding side, investigate further improvements for the code specific to irreversible / lossy compression
- make the opj_decompress utility make better use of the API and consume less memory. Currently it decodes a full image into memory instead of proceeding by chunks (you won't have this issue if using gdal_translate)
- investigate how using GPGPU capabilities (CUDA or OpenCL) could help reduce the time spent in context-driven binary arithmetic decoder.

Contact me if you are interested in some of those items (or others!)

(*) funding provided by academic institutions and archival organizations, namely
… and logistical support from the International Image Interoperability Framework (IIIF), the Council on Library and Information Resources (CLIR), intoPIX, and of course the Image and Signal Processing Group (ISPGroup) from University of Louvain (UCL, Belgium), hosting the OpenJPEG project.