jeudi 12 octobre 2017

Optimizing JPEG2000 decoding

Over this summer I have spent 40 days (*) in the guts of the OpenJPEG open-source library (BSD 2-clause licensed) optimizing the decoding speed and memory consumption. The result of this work is now available in the OpenJPEG 2.3.0 release.

For those who are not familiar with JPEG-2000 (and there is plenty of excuse for that given its complexity), it is a standard for image compression that supports both lossless and lossy methods. It uses the discrete wavelet transform for multi-resolution analysis, and a context-driven binary arithmetic coder for encoding of bit plane coefficients. When you go into the depths of the format, what is striking is the number of independent variables that can be tuned:

- use of tiling or not, and tile dimensions
- number of resolutions
- number of quality layers
- code-block dimensions
- 6 independent options regarding how code-blocks are encoded (code-block styles): use of Selective arithmetic coding bypass, use of Reset context probabilities on coding pass boundaries, use of Termination on each coding pass, use of Vertically stripe causal context, use of Predictable termination, use of Segmentation Symbols. Some can bring decoding speed advantages (notably selective arithmetic coding bypass), at the price of lower compression efficiency. Others might help hardware-based implementations. Others can help detect corruption in the codestream (predictable termination)
- spatial partition of code-blocks into so-called precincts, whose dimension may vary per resolution
- progression order, ie the criterion to decide how packets are ordered, which is a permutation of the 4 variables: Precincts, Component, Resolution, Layer. The standard allows for 5 different permutations. To add extra fun, the progression order might be configured to change several times among the 5 possible (something I haven't yet had the opportunity to really understand)
- division of packets into tile-parts
- use of multi-component transform or not
- choice of lossless or lossy wavelet transforms
- use of start of packet / end of packet markers
- use of  Region Of Interest, to have higher quality in some areas
- choice of image origin and tiling origins with respect to a reference grid (the image and tile origin are not necessarily pixel (0,0))

And if that was not enough, some or most of those parameters may vary per tile! If you already found that TIFF/GeoTIFF had too many parameters to tune (tiling or not, pixel or band interleaving, compression method), JPEG-2000 is probably one or two orders of magnitude more complex. JPEG-2000 is truly a technological and mathematical jewel. But needless to say, having a compliant JPEG-2000 encoder/decoder, which OpenJPEG is (it is an official reference implementation of the standard), is already a complex achievement. Having it perform optimally is yet another target.

Prior to this latest optimization round, I had already worked on enabling multi-threaded decoding at the code-block level, since code-blocks can be decoded independently (once you have re-assembled from the codestream the bytes that encode each code-block), and in the inverse wavelet transform as well (during the horizontal pass, rows can be transformed independently; likewise for columns during the vertical pass). But single-threaded use still had to be improved. Roughly 80 to 90% of the time during JPEG-2000 decoding is spent in the context-driven binary arithmetic decoder, around 10% in the inverse wavelet transform, and the rest in other operations such as the multi-component transform. I managed to get around a 10% improvement in the global decompression time by porting to the decoder an optimization that had been proposed by Carl Hetherington for the encoding side, in the code that determines which bit of a wavelet-transformed coefficient must be encoded during which coding pass. The trick here was to reduce the memory needed for the context flags, so as to decrease the pressure on the CPU cache. Other optimizations in that area consisted in making sure that some critical variables are kept preferably in CPU registers rather than in memory. I've spent a good deal of time looking at the disassembly of the compiled code.
I've also optimized the reversible (lossless) inverse wavelet transform to use the Intel SSE2 (or AVX2) instruction sets so as to process several rows at a time, which can result in up to a 3x speed-up for that stage (so a global 3% improvement).

I've also worked on reducing the memory consumption needed to decode images, by removing the use of intermediate buffers when possible. The result is that the amount of memory needed to do full-image decoding was reduced by a factor of 2.4.

Another major work direction was to optimize speed and memory consumption for sub-window decoding. Up to now, the minimal unit of decompression was a tile, which is OK for tiles of reasonable dimensions (let's say 1024x1024 pixels), but definitely not for images that don't use tiling and that hardly fit into memory. In particular, OpenJPEG couldn't open images of more than 4 billion pixels. The work consisted of 3 steps:
- identifying which precincts and code-blocks are needed for the reconstruction of a spatial region
- optimizing the inverse wavelet transform to operate only on the rows and columns needed
- reducing buffer allocations to the amount strictly needed for the sub-window of interest
The overall result is that decoding time and memory consumption are now roughly proportional to the size of the sub-window to decode, whereas they were previously constant. For example, decoding 256x256 pixels in a 13498x9944x3-band image now takes only 190 ms, versus about 40 seconds before.
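To give an idea of how this is consumed from the API, here is a minimal sketch (file name and window coordinates are arbitrary, error handling mostly omitted) of decoding only a sub-window with the OpenJPEG 2.3 API:

#include <openjpeg.h>
#include <stdio.h>

/* Sketch: decode only a 256x256 window of a JP2 file, relying on the
   sub-window code path described above. */
int decode_window(const char* filename)
{
    opj_dparameters_t params;
    opj_set_default_decoder_parameters(&params);

    opj_codec_t* codec = opj_create_decompress(OPJ_CODEC_JP2);
    opj_setup_decoder(codec, &params);
    opj_codec_set_threads(codec, 4); /* multi-threaded code-block decoding */

    opj_stream_t* stream =
        opj_stream_create_default_file_stream(filename, OPJ_TRUE /* read */);

    opj_image_t* image = NULL;
    if (!opj_read_header(stream, codec, &image))
        return 1;

    /* Restrict decoding to a 256x256 window (reference grid coordinates) */
    if (!opj_set_decode_area(codec, image, 1024, 1024, 1024 + 256, 1024 + 256) ||
        !opj_decode(codec, stream, image) ||
        !opj_end_decompress(codec, stream))
        return 1;

    /* image->comps[0].data now holds the decoded window for the first band */
    printf("Decoded %u x %u\n", image->comps[0].w, image->comps[0].h);

    opj_image_destroy(image);
    opj_stream_destroy(stream);
    opj_destroy_codec(codec);
    return 0;
}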

As a side activity, I've also fixed 2 different annoying bugs that could cause lossless encoding to not be lossless for some combinations of tile sizes and number of resolutions, or when some code-block style options were used.

I've just updated the GDAL OpenJPEG driver (in GDAL trunk) to be more efficient when dealing with untiled JPEG-2000 images.

There are many more things that could be done in the OpenJPEG library :
- port a number of optimizations to the encoding side: multi-threading, discrete wavelet transform optimizations, etc...
- on the decoding side, reduce memory consumption further, particularly in the untiled case. Currently we need to ingest into memory the whole codestream for a tile (so the whole compressed file, for an untiled image)
- linked to the above, use of TLM and PLT marker segments (kind of indexes to tiles and packets)
- on the decoding side, investigate further improvements for the code specific of irreversible / lossy compression
- make the opj_decompress utility make better use of the API and consume less memory. Currently it decodes a full image into memory instead of proceeding by chunks (you won't have this issue if using gdal_translate)
- investigate how using GPGPU capabilities (CUDA or OpenCL) could help reduce the time spent in the context-driven binary arithmetic decoder.

Contact me if you are interested in some of those items (or others !)




(*) funding provided by academic institutions and archival organizations, namely
… And logistic support from the International Image Interoperability Framework (IIIF), the Council on Library and Information Resources (CLIR), intoPIX, and of course the Image and Signal Processing Group (ISPGroup) from University of Louvain (UCL, Belgium) hosting the OpenJPEG project.

mercredi 11 octobre 2017

GDAL and cloud storage

In the past weeks, a number of improvements related to access to cloud storage have been committed to GDAL trunk (future GDAL 2.3.0)

Cloud based virtual file systems


There was already support for accessing private data in Amazon S3 buckets through the /vsis3/ virtual file system (VFS). Besides a few robustness fixes, a few new capabilities have been added, like creation and deletion of directories inside a bucket with VSIMkdir() / VSIRmdir(). The authentication methods have also been extended to support, beyond the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables, the other ways accepted by the "aws" command line utilities, that is to say storing credentials in the ~/.aws/credentials or ~/.aws/config files. If GDAL is executed from an Amazon EC2 instance that has been assigned rights to buckets, GDAL will automatically fetch the instance profile credentials.

The existing read-only /vsigs/ VFS for Google Cloud Storage has been extended with write capabilities (creation of new files), to be at feature parity with /vsis3/. The authentication methods have also been extended to support OAuth2 authentication with a refresh token, or with service account authentication. The credentials can be stored in a ~/.boto configuration file. And when run from a Google Compute Engine virtual machine, GDAL will automatically fetch the instance profile credentials.

Two new VFS have also been added, /vsiaz/ for Microsoft Azure Blobs and /vsioss/ for Alibaba Cloud Object Storage Service. They support read and write operations similarly to the two previously mentioned VFS.
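For C/C++ users, these VFS are accessed through the regular VSI*L API; a minimal sketch (bucket and object names are made up) could look like:

#include "cpl_vsi.h"
#include "cpl_conv.h"

/* Sketch: read the beginning of an object in a (hypothetical) S3 bucket through
   /vsis3/, then create a directory. Credentials may instead come from
   ~/.aws/credentials or the EC2 instance profile. */
void vsis3_example()
{
    CPLSetConfigOption("AWS_ACCESS_KEY_ID", "xxxx");
    CPLSetConfigOption("AWS_SECRET_ACCESS_KEY", "xxxx");

    VSILFILE* fp = VSIFOpenL("/vsis3/mybucket/raster/my.tif", "rb");
    if (fp != nullptr)
    {
        GByte header[16];
        VSIFReadL(header, 1, sizeof(header), fp);
        VSIFCloseL(fp);
    }

    VSIMkdir("/vsis3/mybucket/newdir", 0755);
}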


To make file and directory management easy, a number of Python sample scripts have been created or improved:
gdal_cp.py my.tif /vsis3/mybucket/raster/
gdal_cp.py -r /vsis3/mybucket/raster /vsigs/somebucket
gdal_ls.py -lr /vsis3/mybucket
gdal_rm.py /vsis3/mybucket/raster/my.tif
gdal_mkdir.py /vsis3/mybucket/newdir
gdal_rmdir.py -r /vsis3/mybucket/newdir

Cloud Optimized GeoTIFF


Over the past few months, there has been adoption by various actors of the cloud optimized formulation of GeoTIFF files, which enables clients to efficiently open and access portions of a GeoTIFF file available through HTTP GET range requests.

Source code for an online service that offers validation of cloud optimized GeoTIFFs (using GDAL and the validate_cloud_optimized_geotiff.py script underneath) and can run as an AWS Lambda function is available. Note: as the current definition of what is or is not a cloud optimized formulation has been unilaterally decided up to now, it cannot be excluded that it might change on some points (for example relaxing constraints on the ordering of the data of each overview level, or enforcing that tiles are ordered in a top-to-bottom, left-to-right way).

GDAL trunk has received improvements to speed up access to sub-windows of a GeoTIFF file, by making sure that the tiles that participate in a sub-window of interest are requested in parallel (this is true for public files accessed through /vsicurl/ or with the four above-mentioned specialized cloud VFS), by reducing the amount of data fetched to the strict minimum, and by merging requests for consecutive ranges. In some environments, particularly when accessing the storage service of a virtual machine of the same provider, HTTP/2 can be used by setting the GDAL_HTTP_VERSION=2 configuration option (provided you have a recent enough libcurl built against nghttp2). In that case, HTTP/2 multiplexing will be used to request and retrieve data on the same HTTP connection (saving the time to establish TLS, for example). Otherwise, GDAL will default to several parallel HTTP/1.1 connections. For long-lived processes, efforts have been made to re-use existing HTTP connections as much as possible.
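As an illustration, reading a sub-window of a (hypothetical) cloud optimized GeoTIFF served over HTTP boils down to a regular RasterIO() call; GDAL takes care of issuing only the needed range requests:

#include "gdal_priv.h"
#include "cpl_conv.h"
#include <vector>

/* Sketch: read a 512x512 window of a remote GeoTIFF through /vsicurl/
   (URL and window coordinates are made up). */
void read_cog_window()
{
    GDALAllRegister();
    CPLSetConfigOption("GDAL_HTTP_VERSION", "2"); /* optional HTTP/2 multiplexing */

    GDALDataset* poDS = static_cast<GDALDataset*>(
        GDALOpen("/vsicurl/https://example.com/cog.tif", GA_ReadOnly));
    if (poDS == nullptr)
        return;

    std::vector<GByte> buffer(512 * 512);
    CPLErr eErr = poDS->GetRasterBand(1)->RasterIO(GF_Read,
                      1024, 1024, 512, 512,    /* source window */
                      buffer.data(), 512, 512, /* buffer dimensions */
                      GDT_Byte, 0, 0, nullptr);
    (void)eErr;
    GDALClose(poDS);
}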

samedi 11 mars 2017

Dealing with huge vector GeoPackage databases in GDAL/OGR

Recently, I've fixed a bug in the OGR OpenFileGDB driver (the driver built by reverse engineering the ESRI FileGeoDatabase format) so as to be able to read tables whose section that enumerates and describes fields is located beyond the first 4 GB of the file. The table in question, from the 2016 TIGER database, indeed features all linear edges of the USA and is 15 GB in size (feature and spatial indexes included), with 85 million features.

Some time before, I had to deal with a smaller database (1.7 GB as GeoPackage) with 5.4 million polygons (bounding boxes) from the cadastre of an Italian province. One issue I noticed is that getting the summary of the layer, with ogrinfo -al -so the.gpkg, was very slow. The reason is that this summary includes the feature count, and there's no way to get this metadata quickly, apart from running the "SELECT COUNT(*) FROM the_table" request, which causes a full scan of the table. For small databases, this runs fast, but when going into the gigabyte realm, this can take several dozen seconds. Getting the spatial extent of the layer, which is another piece of information displayed by the summary mode of ogrinfo, is fast however, since the gpkg_contents "system" table of a GeoPackage database includes the bounding box of the table. So my idea was to extend the definition of the gpkg_contents table with a new column, ogr_feature_count, to store the feature count. I went on to implement that, and it worked fine. The synchronization of the value of ogr_feature_count after edits can be done with 2 SQLite triggers, on row insertion and deletion, and that works with implementations that are not aware of the existence of this new column, like older OGR versions. Unfortunately it appears that at least one other implementation completely rejected such databases. The GeoPackage specification is somewhat inconsistent about whether additional columns are accepted in system tables or not. From the /base/core/contents/data/table_def test case, "Column order, check constraint and trigger definitions, and other column definitions in the returned sql are irrelevant.", it would seem that additional columns should still be considered as a valid GeoPackage. Anyway, that's only the theory, and we don't want to break interoperability for just a nice-to-have feature... So I changed the design a bit and created a new table, gpkg_ogr_contents, with table_name and feature_count columns. I'm aware that I should not borrow the gpkg_ prefix, but I felt it was safer to do so since other implementations will probably ignore any unknown gpkg_-prefixed table, and the addition of the ogr_ prefix makes collisions with future extensions of the GeoPackage specification unlikely. The content of this table is also kept in sync with the data table thanks to two triggers, and this makes the other software that rejected my first attempt happy. Problem solved.
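For illustration, the synchronization logic looks like the following (table and trigger names here are made up; the exact DDL generated by the OGR driver differs):

#include <sqlite3.h>

/* Sketch: keep the feature_count column of the gpkg_ogr_contents side table in
   sync with a data table using two triggers, so that implementations unaware
   of the table are not affected. */
void install_feature_count_triggers(sqlite3* db)
{
    const char* sql =
        "CREATE TABLE IF NOT EXISTS gpkg_ogr_contents("
        "  table_name TEXT NOT NULL PRIMARY KEY,"
        "  feature_count INTEGER DEFAULT NULL);"

        "CREATE TRIGGER my_table_insert_trigger AFTER INSERT ON my_table "
        "BEGIN "
        "  UPDATE gpkg_ogr_contents SET feature_count = feature_count + 1 "
        "  WHERE table_name = 'my_table'; "
        "END;"

        "CREATE TRIGGER my_table_delete_trigger AFTER DELETE ON my_table "
        "BEGIN "
        "  UPDATE gpkg_ogr_contents SET feature_count = feature_count - 1 "
        "  WHERE table_name = 'my_table'; "
        "END;";

    sqlite3_exec(db, sql, nullptr, nullptr, nullptr);
}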

Let's come back to our 13 GB FileGeoDatabase. My first attempt to convert it to GeoPackage with ogr2ogr converted the features in about half an hour, but once that 100% stage was reached, the finalization, which includes building the spatial index, took ages. So long that after a whole night it still wasn't finished, and it was seriously making the computer unresponsive due to massive I/O activity. In the GeoPackage driver, the spatial index is indeed created after feature insertion, so that the feature table and spatial index tables are well separated in the file; from previous experiments with the Spatialite driver, this proved to be the right strategy. Populating the SQLite R-Tree is done with a simple statement: INSERT INTO my_rtree SELECT fid, ST_MinX(geom), ST_MaxX(geom), ST_MinY(geom), ST_MaxY(geom) FROM the_table. Analyzing what happens in the SQLite code is not easy when you are not familiar with that code base, but my intuition is that there was constant back and forth between the geometry data area and the R-Tree area in the file, making the SQLite page cache inefficient. So I decided to experiment with a more progressive approach: iterate over the feature table and collect the fid, minx, maxx, miny, maxy by chunks of 100 000 rows, insert those 100 000 bounding boxes into the R-Tree, and loop again until the feature table has been completely read (see the sketch below). With such a strategy, the spatial index can now be built in 4h30. The resulting GeoPackage file weighs 31.6 GB, so twice as large as the FileGeoDatabase. One of the reasons for the difference must be that geometries in a FileGeoDatabase are compressed (quantization for coordinate precision, delta encoding and use of variable-length integers), whereas GeoPackage uses an uncompressed SQLite BLOB based on OGC WKB.
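Here is a simplified sketch of that chunked strategy using the raw SQLite API (the actual GDAL implementation differs; ST_MinX() and friends are the spatial functions that OGR registers on its connections):

#include <sqlite3.h>
#include <vector>

struct BBox { sqlite3_int64 fid; double minx, maxx, miny, maxy; };

/* Sketch: populate the R-Tree by chunks of 100 000 bounding boxes instead of
   one giant INSERT ... SELECT statement. */
void populate_rtree_chunked(sqlite3* db)
{
    const int CHUNK = 100000;
    sqlite3_stmt* read_stmt = nullptr;
    sqlite3_prepare_v2(db,
        "SELECT fid, ST_MinX(geom), ST_MaxX(geom), ST_MinY(geom), ST_MaxY(geom) "
        "FROM the_table", -1, &read_stmt, nullptr);

    sqlite3_stmt* insert_stmt = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT INTO my_rtree VALUES (?,?,?,?,?)", -1, &insert_stmt, nullptr);

    bool done = false;
    std::vector<BBox> chunk;
    while (!done)
    {
        chunk.clear();
        while (static_cast<int>(chunk.size()) < CHUNK)
        {
            if (sqlite3_step(read_stmt) != SQLITE_ROW) { done = true; break; }
            BBox b = { sqlite3_column_int64(read_stmt, 0),
                       sqlite3_column_double(read_stmt, 1),
                       sqlite3_column_double(read_stmt, 2),
                       sqlite3_column_double(read_stmt, 3),
                       sqlite3_column_double(read_stmt, 4) };
            chunk.push_back(b);
        }
        /* Committing while the read statement is still active requires SQLite >= 3.7.11 */
        sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);
        for (const BBox& b : chunk)
        {
            sqlite3_bind_int64(insert_stmt, 1, b.fid);
            sqlite3_bind_double(insert_stmt, 2, b.minx);
            sqlite3_bind_double(insert_stmt, 3, b.maxx);
            sqlite3_bind_double(insert_stmt, 4, b.miny);
            sqlite3_bind_double(insert_stmt, 5, b.maxy);
            sqlite3_step(insert_stmt);
            sqlite3_reset(insert_stmt);
        }
        sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);
    }
    sqlite3_finalize(read_stmt);
    sqlite3_finalize(insert_stmt);
}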
My first attempt at opening it in QGIS resulted in the UI being frozen, probably for hours. The reason is that QGIS always issues a spatial filter, even when requesting an area of interest that is at least as large as the extent of the layer, where there is no performance gain to expect from using it. So the first optimization was, in the OGR GeoPackage driver, to detect that situation and not translate the OGR spatial filter into a SQLite R-Tree filter. QGIS could now open the database and progressively display the features. Unfortunately, when zooming in, the UI became frozen again. When applying a spatial filter, the GeoPackage driver created a SQL request like the following one:

SELECT * FROM the_table WHERE fid IN 
       (SELECT id FROM the_rtree WHERE 
        xmin <= bbox_xmax AND xmax >= bbox_xmin AND
        ymin <= bbox_ymax AND ymax >= bbox_ymin)

It turns out that the sub-select (the one that fetches the feature IDs from the spatial index) is apparently entirely run before the outer select (the one that returns geometry and attributes) starts being evaluated. This way of expressing the spatial filter came from the Spatialite driver (since GeoPackage and Spatialite use the exact same mechanisms for spatial indexing), itself based on examples from an old Spatialite tutorial. For not-too-big databases, this runs well. After some experimentation, it turns out that doing a JOIN between the feature table and the R-Tree virtual table makes it possible to have a non-blocking request:

SELECT * FROM the_table t JOIN the_rtree r ON t.fid = r.id
WHERE r.xmin <= bbox_xmax AND r.xmax >= bbox_xmin AND
      r.ymin <= bbox_ymax AND r.ymax >= bbox_ymin

Now QGIS is completely responsive, although I find that even at high zoom levels the performance is somewhat disappointing, i.e. features appear rather slowly. There seems to be some threshold effect related to the size of the database, since the performance is rather good on the Italian province cadastral use case.

Another experiment showed that increasing the SQLite page size from 1024 bytes (the default in SQLite 3.11 or earlier) to 4096 bytes (the default since SQLite 3.12) decreases the database size to 28.8 GB. This new page size of 4096 bytes is now used by default by the OGR SQLite and GPKG drivers (unless OGR_SQLITE_PRAGMA=page_size=xxxx is specified as a configuration option).

I also discovered that increasing the SQLite page cache from its 2 MB default to 2 GB (with --config OGR_SQLITE_CACHE 2000) significantly improved the time to build the spatial index, decreasing the total conversion time from 4h30 to 2h10. 2 GB is just a value selected at random; it might be too large, or perhaps a larger value would help further.
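For reference, the same tunings can be applied from code rather than from the command line (the values here are just examples):

#include "cpl_conv.h"

/* Sketch: equivalent of the --config switches mentioned above */
void tune_sqlite_options()
{
    CPLSetConfigOption("OGR_SQLITE_CACHE", "2000");            /* page cache, in MB */
    CPLSetConfigOption("OGR_SQLITE_PRAGMA", "page_size=4096"); /* page size at creation time */
}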

All improvements mentioned above (faster spatial index creation, better use of the spatial index and change of default page size) are now in GDAL trunk, and will be available in the upcoming 2.2.0 release.

lundi 19 septembre 2016

Running FreeBSD in Travis-CI

Note for geospatial focused readers: this article has little to do with geo, although it is applied to GDAL, but more with software virtualization, hacks, software archeology and the value of free software. Note for virtualization experts: I'm not one, so please bear with my approximate language and inaccuracies.

Travis-CI is a popular continuous integration platform that can be easily used with software projects hosted at GitHub. Travis-CI has a free offer for software having a public repository at GitHub. Travis-CI provides cloud instances running Linux or Mac OS X. To increase the portability testing of GDAL, I wondered if it was somehow possible to run another operating system with Travis-CI, for example FreeBSD. A search led me to this question in their bug tracker, but the outcome seems to be that it is not possible, nor is it in their medium or long term plans.


One idea that came quickly to mind was to use the QEMU machine emulator, which can emulate full machines (CPU, peripherals, memory, etc.) of several hardware architectures (Intel x86, ARM, MIPS, SPARC, PowerPC, etc.). To run QEMU, you mostly need a virtual hard drive, i.e. a file that replicates the content of the hard disk of the virtual machine you want to run. I found here a small ready-to-use x86_64 image of FreeBSD 9.2, with one nice property: the ssh server and DHCP are automatically started, making it possible to connect to it remotely.

So starting with a Travis-CI Ubuntu Trusty (14.04) image, here are the steps to launch our FreeBSD guest:

sudo apt-get install qemu
wget ftp://ftp.stacklet.com/archive/x86-64/FreeBSD/9.2/\
    freebsd.9-2.x86-64.20140103.raw.img.txz
tar xJvf freebsd.9-2.x86-64.20140103.raw.img.txz
qemu-system-x86_64 -daemonize -display none \
   freebsd.9-2.x86-64.20140103.raw.img \
   -m 1536 -smp 4 -net user,hostfwd=tcp::10022-:22 -net nic

The qemu invocation starts the virtual machine as a daemon without display, turns on networking, and asks for the guest (ie FreeBSD) TCP port 22 (the ssh port) to be accessible by the host (Linux Trusty) as port 10022.

To ssh into the VM, there's one slight inconvenience: ssh login requires a password. The root password for this VM is "password". But ssh is secured and doesn't accept the password being provided through files or piped in with "echo". I found that the sshpass utility was designed to overcome this in situations where security isn't really what matters. However, the version of sshpass bundled with Ubuntu Trusty didn't work with the corresponding ssh version (not surprisingly, since the authors of sshpass mention that it is full of assumptions about how ssh works that can easily break with changes of ssh). I found that the latest version, 1.06, worked however.

With 4 extra lines, we can now log into our FreeBSD instance:


wget http://fossies.org/linux/privat/sshpass-1.06.tar.gz
tar xzf sshpass-1.06.tar.gz
cd sshpass-1.06 && ./configure && make -j3 && cd ..
export MYSSH="sshpass-1.06/sshpass -p password ssh \
   -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
    root@localhost -p10022" 

So now we can configure our FreeBSD VM a bit, installing with the 'pkg' package manager a few dependencies to build GDAL:

$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg bootstrap'
$MYSSH 'mkdir /etc/pkg'
sshpass-1.06/sshpass -p password scp \
   -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
   -P 10022 FreeBSD.conf root@localhost:/etc/pkg/FreeBSD.conf
$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg update'
$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg install gmake'
$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg install python27'
$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg install py27-numpy'
$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg install sqlite3 curl'
$MYSSH 'env ASSUME_ALWAYS_YES=YES pkg install expat'
    
Here we go: ./configure && make ! That works, but 50 minutes later (the maximum length of a Travis-CI job), our job is killed with perhaps only 10% of the GDAL code base compiled. The reason is that we used the pure software emulation mode of QEMU, which involves on-the-fly disassembling and re-assembling of the code to be run. QEMU can for example emulate an ARM guest on an Intel host, and vice-versa, and there's no real shortcut when the guest and host architectures are the same. So your guest can typically run 10 times slower than it would on a real machine with its native architecture. Actually, that's not entirely true: with the addition of CPU instructions dedicated to virtualization (VT-x for Intel, AMD-V for AMD), a hypervisor called KVM (Kernel Virtual Machine) was added to the Linux kernel, and QEMU can use KVM to implement the above-mentioned shortcuts and reach near bare-metal performance. It just takes using 'kvm' instead of 'qemu-system-x86_64'. Let's do that ! Sigh, our attempt fails miserably with a "failed to initialize KVM" error message. If we display the content of /proc/cpuinfo, we get:

flags  : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc
rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1
sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm
fsgsbase bmi1 avx2 smep bmi2 erms xsaveopt

A lot of nice-to-have things, but the important thing to notice is the absence of the 'vmx' (Intel virtualization instruction set) and 'svm' (similar for AMD) flags. So this machine has no hardware virtualization capabilities ! Or more precisely, this *virtual* machine has no such capabilities. The documentation of the Trusty Travis-CI environment mentions that it is based on Google Compute Engine as the hypervisor, and apparently it does not allow (or is not configured to allow) nested virtualization, despite GCE being based on KVM, and KVM potentially allowing nested virtualization. GCE allows Docker to run inside VMs, but Docker only runs Linux "guests". So it seems we are really stuck.

Here comes the time for good old memories and a bit of software archeology. QEMU was started by Fabrice Bellard. If you didn't know his name yet, F. Bellard created FFMPEG and QEMU, holds a world record for the number of decimals of Pi computed on a COTS PC, has ported QEMU to JavaScript to run the Linux kernel in your browser, devised BPG, a new image compression format based on HEVC, etc...

At the time when his interest was focused on QEMU, he created KQemu, a kernel module (for Linux, Windows and FreeBSD hosts) that could significantly enhance QEMU performance when the guest and host are x86/x86_64. KQemu requires QEMU to be modified to communicate with the kernel module (similarly to how QEMU works with the KVM kernel module). KQemu started as a closed source project and was eventually released as GPL v2. One of the key features of KQemu is that it does not require (nor use) hardware virtualization instructions. KQemu software virtualization involves complicated tricks, particularly for code in the guest that runs in "Ring 0", i.e. with the highest privileges, which must be patched to run as Ring 3 (non-privileged) code in the host. You can get an idea of what is involved by reading the documentation of VirtualBox regarding software virtualization. I will not pretend that QEMU+KQemu did the exact same tricks as VirtualBox, but that should give you at least a picture of the challenges involved. This complexity is what led to KQemu being eventually abandoned when CPUs with hardware virtualization became widespread on the market, since KVM-based virtualization is much cleaner to implement. Starting with QEMU 0.12.0, KQemu support was finally dropped from the QEMU code base.

Due to KQemu not using hardware virtualization instructions, there is good hope that it can run inside a virtualized environment. So let's have a try with QEMU 0.11.1 and KQemu 1.4.0pre. Compiling QEMU 0.11.1 on Ubuntu Trusty goes quite well, except for a linking error easily fixed with this trivial patch. Building KQemu is a bit more involved, it being a kernel module and the (internal) Linux kernel API being prone to changes from time to time. One bit of good news is that the Linux-specific part of kqemu is a relatively small file and the API breaks were limited to 2 aspects. The way to get the memory management structure of the current task had changed in Linux 2.6.23, and I found this simple patch to solve it. Another change that occurred in a later Linux release is the removal of kernel semaphores, replaced by mutexes. My cumulated patch to fix all compilation issues is here. I don't pretend that it is technically correct, as my knowledge of kernel internals is more than limited, but a local test seemed to confirm that adding -enable-kqemu to the qemu command line worked sufficiently well to start and do things in the FreeBSD VM, and at a very decent speed. I tried the -kernel-kqemu switch that turns on KQemu acceleration for kernel guest code, but that resulted in a crash of qemu near the end of the boot process of FreeBSD. Which is not surprising, as -kernel-kqemu makes some assumptions about the internal workings of the guest OS, which perhaps FreeBSD does not meet. Or perhaps this is just a bug of qemu/kqemu.

Running it on Travis-CI was successful too, with the compilation being done in 20 minutes, so probably half the speed of bare metal, which is good enough. kqemu does not support SMP guests (but this was listed in the potential "roadmap", so probably achievable), but if we wanted to speed up compilation, we could potentially launch 2 kqemu-enabled qemu instances (the Travis-CI VMs have 2 cores available) that would compile different parts of the software, with the build tree being hosted in a NFS share. I said that compilation goes fine, except that the build process (actually the qemu instance) crashes at libgdal.so build time (I can also reproduce that locally). This is probably because the history of qemu & kqemu wasn't long enough to go from beta quality to production quality. I've worked around this issue by only doing the compilation in -enable-kqemu mode, restarting the VM in pure software emulation to do the linking, and then restarting in -enable-kqemu mode. Unfortunately, running the GDAL Python autotest suite in kqemu mode also leads to a qemu crash (due to the design of kqemu only running guest code in ring 3, crashes do not affect the host), and running it completely in pure emulation mode reaches the 50 minute time-out, so for the sake of this demonstration, I only run one of the test files. And now we have our first successful build given this build recipe.

I could also have potentially tried VirtualBox because, as mentioned above, it supports software virtualization with acceleration. But that is only for 32 bit guests (and I didn't find a ready-made FreeBSD 32 bit image that you can directly ssh into). For 64 bit guests, VirtualBox requires hardware virtualization to be available in the host. To the best of my knowledge, KQemu is (was) the only solution to enable acceleration of 64 bit guests without hardware requirements.

My main conclusion of this experiment is that it is a striking example of a key advantage of the open source model. If kqemu had not been released as GPL v2, I would never have been able to resurrect it and modify it to run on newer kernels (actually there was also QVM86, an attempt at developing an alternative to KQemu while KQemu was still closed source, which was abandoned when VirtualBox was open sourced).

mardi 19 juillet 2016

Speeding up computation of raster statistics using SSE-2/AVX-2

GDAL offers a method ComputeStatistics() that given a raster band returns the minimum and maximum values of pixels, the mean value and the standard deviation.

For those not remembering how to compute mean and standard deviations, the basic formulas for values indexed from 0 to N-1 are :
mean = sum(value(i) for i = 0 to N-1) / N
std_dev = square root of the mean of the square of the differences of values to the mean
std_dev = sqrt(sum(i = 0 to N-1, (value(i) - mean)^2) / N)
A very naive version would first compute the mean, and in a second pass compute the standard deviation.

But it can easily be proven (by expanding the (value(i) - mean)^2 term) that it is also equivalent to :
std_dev = sqrt(sum(i = 0 to N-1, value(i)^2)/N - mean^2)
std_dev = sqrt(mean_of_square_values - square_of_mean)

std_dev = sqrt(sum(i = 0 to N-1, value(i)^2)/N - (sum_of_values/N)^2)
std_dev = sqrt(N^2 *(sum(i = 0 to N-1, value(i)^2)/N - (sum_of_values/N)^2)) / N
std_dev = sqrt(N * sum_of_square_values - sum_of_values^2) / N
A less naive implementation would compute the sum of values and the sum of square values in a single pass. However, the standard deviation computed like that might be subject to numeric instability: even if the result is small, sum_of_square_values and sum_of_values can be very big for a large number of pixels, and thus, if represented with floating point numbers, the difference between both terms can be inaccurate.

Welford algorithm

So in recent GDAL versions, the computation of the mean and standard deviation is done in a progressive and numerically stable way, thanks to the Welford algorithm

The generic code is:
pixel_counter = 0
mean = 0
M2 = 0
foreach(value in all pixels):
    if value < minimum or pixel_counter == 0: minimum = value
    if value > maximum or pixel_counter == 0: maximum = value
    pixel_counter = pixel_counter + 1
    delta = value - mean
    mean = mean + delta / pixel_counter
    M2 = M2 + delta * (value - mean);

std_dev = sqrt( M2 / pixel_counter )
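A direct C++ transcription of that pseudo-code (illustrative only, not the actual GDAL code) would be:

#include <cmath>
#include <cstddef>

struct Stats { double minimum, maximum, mean, std_dev; };

/* Welford streaming computation of min/max/mean/stddev */
Stats compute_stats_welford(const double* values, size_t count)
{
    Stats s = {0.0, 0.0, 0.0, 0.0};
    double M2 = 0.0;
    for (size_t i = 0; i < count; ++i)
    {
        const double value = values[i];
        if (i == 0 || value < s.minimum) s.minimum = value;
        if (i == 0 || value > s.maximum) s.maximum = value;
        const double delta = value - s.mean;
        s.mean += delta / static_cast<double>(i + 1);
        M2 += delta * (value - s.mean); /* uses the already-updated mean */
    }
    if (count > 0)
        s.std_dev = std::sqrt(M2 / static_cast<double>(count));
    return s;
}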

Proof of Welford algorithm

(You can skip this paragraph and still follow the rest of this article)

The magic of Welford algorithm lies in the following recurrence relations.

For the mean, it is rather obvious :

N*mean(N) = sum(i = 0 to N-1, value(i))
N*mean(N) = sum(i = 0 to N-2, value(i)) + value(N-1)
N*mean(N) = (N-1) * mean(N-1) + value(N-1)
mean(N) = (N-1)/N * mean(N-1) + value(N-1)/N
mean(N) = mean(N-1) + (value(N-1) - mean(N-1)) / N

Hence mean = mean + delta / pixel_counter

For the standard deviation, the proof is a little bit more lengthy :

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - mean(N))^2 )

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - (mean(N-1) + (value(N-1) - mean(N-1)) / N))^2 )

N*stddev(N)^2 = sum(i=0 to N-1, ((value(i) - mean(N-1)) - ((value(N-1) - mean(N-1)) / N))^2 )

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - mean(N-1))^2 + ((value(N-1) - mean(N-1)) / N)^2
             - 2 * (value(i) - mean(N-1))*((value(N-1) - mean(N-1)) / N)  )

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - mean(N-1))^2) + N * ((value(N-1) - mean(N-1)) / N)^2
              - 2 * sum(i=0 to N-1, value(i) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N)

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1))^2
              + N * ((value(N-1) - mean(N-1)) / N)^2
              - 2 * sum(i=0 to N-1, value(i) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N)

Noting that sum(i=0 to N-1, value(i) - mean(N-1)) = N * (mean(N) - mean(N-1)), this becomes:

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1))^2 * (1 + 1/N)
              - 2 * N * (mean(N) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N)

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) *
              ((1 + 1/N) * (value(N-1) - mean(N-1)) - 2 * (mean(N) - mean(N-1)))

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) *
              ((value(N-1) - mean(N-1)) + (value(N-1) - mean(N-1)) / N - 2 * (mean(N) - mean(N-1)))

Since (value(N-1) - mean(N-1)) / N = mean(N) - mean(N-1) (from the mean recurrence above), this simplifies to:

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) *
              ((value(N-1) - mean(N-1)) - (mean(N) - mean(N-1)))

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) * (value(N-1) - mean(N))


Hence M2 = M2 + delta * (value - mean)

Integer based computation of standard deviation

The Welford algorithm is good, but it involves floating point operations for each pixel to compute the progressive mean and variance, whereas fundamentally we would need those floating point operations only at the end if using the original formulas, and we could use integer arithmetic for the rest. Another drawback of the Welford approach is that it prevents any direct parallelization (there might still be ways to reconcile partial computations, but I have not explored those), whereas if you have a set of pixels, you can conceptually divide it into as many subsets as you want, and for each subset compute its local minimum, maximum, sum of values and sum of square values. Merging subsets is then trivial: take the minimum of minimums, the maximum of maximums, the sum of sums of values and the sum of sums of square values (see the small sketch below).
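A tiny sketch of what merging partial results looks like (names are made up):

#include <cstdint>

/* Partial statistics of one subset of pixels, and how two subsets merge */
struct PartialStats
{
    uint64_t count;
    uint64_t sum;
    uint64_t sum_square;
    uint8_t  minimum;
    uint8_t  maximum;
};

PartialStats merge(const PartialStats& a, const PartialStats& b)
{
    PartialStats r;
    r.count      = a.count + b.count;
    r.sum        = a.sum + b.sum;
    r.sum_square = a.sum_square + b.sum_square;
    r.minimum    = (a.minimum < b.minimum) ? a.minimum : b.minimum;
    r.maximum    = (a.maximum > b.maximum) ? a.maximum : b.maximum;
    return r;
}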

Let us consider the case of pixels whose type is unsigned byte, ie with values in the range [0,255]. We want to compute
std_dev = sqrt(N * sum_of_square_values - sum_of_values^2) / N
For practical reasons, we want N, sum_of_square_values and sum_of_values to fit on a 64 bit unsigned integer (uint64), which is the largest natural integral type that can be easily and efficiently used on today's CPUs. The most limiting factor will be sum_of_square_values. Given that in the worst case a square value is equal to 255*255, the maximum number of pixels N we can address is (2^64-1) / (255*255) = 283 686 952 306 183, which is large enough to represent a raster of 16 million x 16 million pixels. Good enough.

We now need to be able to multiply two uint64 values and get the result as a uint128, and to compute the difference of two uint128 values. Multiplication on Intel/AMD CPUs in 64 bit mode natively yields a 128 bit wide result; it is just that there is no standardized way in C/C++ to get that result. For the GCC compiler in 64 bit mode, the __uint128_t type can be used in a transparent way
to do that :
__uint128_t result = (__uint128_t)operand_64bit * other_operand_64bit
For Visual Studio compilers in 64 bit mode, a special instruction _umul128() is available.

What about non-Intel or non-64 bit CPUs ? In that case, we have to do the multiplication by hand, decomposing each uint64 value into its lower and upper uint32 parts, doing 4 uint32*uint32->uint64 multiplications, summing the intermediate results, handling the carries and building the resulting number. Not very efficient, but we do not really care about that, since it is just a final operation.

To make it easier, that partial 128 bit arithmetic is abstracted in a GDALUInt128 C++ class that has different implementations depending on the CPU and compiler support.
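Here is a sketch of what the portable fallback path amounts to (the real GDALUInt128 class also wraps the __uint128_t and _umul128() fast paths):

#include <cstdint>

struct UInt128 { uint64_t hi, lo; };

/* Portable 64x64 -> 128 bit multiplication, done with four
   32x32 -> 64 bit multiplications and carry propagation */
UInt128 mul_64_64(uint64_t a, uint64_t b)
{
    const uint64_t a_lo = a & 0xFFFFFFFFU, a_hi = a >> 32;
    const uint64_t b_lo = b & 0xFFFFFFFFU, b_hi = b >> 32;

    const uint64_t lo_lo = a_lo * b_lo;
    const uint64_t lo_hi = a_lo * b_hi;
    const uint64_t hi_lo = a_hi * b_lo;
    const uint64_t hi_hi = a_hi * b_hi;

    /* middle terms contributing at bit 32, with carry */
    const uint64_t mid = (lo_lo >> 32) + (lo_hi & 0xFFFFFFFFU) + (hi_lo & 0xFFFFFFFFU);

    UInt128 r;
    r.lo = (lo_lo & 0xFFFFFFFFU) | (mid << 32);
    r.hi = hi_hi + (lo_hi >> 32) + (hi_lo >> 32) + (mid >> 32);
    return r;
}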

Now that we have solved the final part of the computation, we can write the computation loop as follows:

    minimum = maximum = value[0]
    foreach value:
        if value < minimum: minimum = value
        else if value > maximum: maximum = value
        sum = sum + value
        sum_square = sum_square + value * value


Can we do better ? A bit of loop unrolling can help :

    minimum = maximum = value[0]
    foreach value pair (value1, value2):
        if value1 < minimum: minimum = value1
        else if value1 > maximum: maximum = value1
        sum = sum + value1
        sum_square = sum_square + value1 * value1
        if value2 < minimum: minimum = value2
        else if value2 > maximum: maximum = value2
        sum = sum + value2
        sum_square = sum_square + value2 * value2
    (deal with potential remaining pixel if odd number of pixels)


If we start with comparing value1 and value2, we can actually save a comparison (resulting in 3 comparisons for each pair of pixel, instead of 4) :

    minimum = maximum = value[0]
    foreach value pair (value1, value2):
        if value1 < value2:
            if value1 < minimum: minimum = value1
            if value2 > maximum: maximum = value2
        else:
            if value2 < minimum: minimum = value2
            if value1 > maximum: maximum = value1
        sum = sum + value1
        sum_square = sum_square + value1 * value1
        sum = sum + value2
        sum_square = sum_square + value2 * value2
    (deal with potential remaining pixel if odd number of pixels)


This improvement alone can already dramatically reduce the computation time, from
1m10s to 7s, when computing the statistics 50 times on a 10000 x 10000 pixel raster.

Parallelization with SSE2

We have not yet explored the parallelization of the algorithm. One way to do it would be to use multi-threading, but for Intel-compatible CPUs we can also explore the capabilities of SIMD (Single Instruction/Multiple Data) instruction sets. On 64 bit Intel, the SSE2 instruction set, which offers vectorized operations on integers, is guaranteed to always be present. 16 registers (XMM0 to XMM15) are available, each 128 bit wide.

So each register is wide enough to hold 16 packed int8/uint8, 8 packed int16/uint16, 4 packed int32/uint32 or 2 packed int64/uint64, depending on the wished representation. A diverse set of operations is offered, which generally operate on the sub-parts of each register independently. For example, c=_mm_add_epi8(a,b) will independently compute c[i]=a[i]+b[i] for i=0 to 15, and that in just one CPU cycle. _mm_add_epi16() will work on packed uint16, etc. To add some salt, not all operators are available for all elementary subtypes however.

Compilers are supposed to be able to automatically vectorize some C code, but in practice they rarely manage to do so for real world code, hence requiring the programmer to use the SIMD instruction set by hand. All major compilers (gcc, clang, Visual Studio C/C++) offer access to the SSE2 instruction set through "intrinsics", which are C inline functions that wrap the corresponding assembly instructions while still being C/C++. This allows the compiler to do the register allocation and various other optimizations (such as re-ordering), which is a huge win over coding directly in assembly. The Intel intrinsics guide is a useful resource to find the appropriate intrinsics.

So a tentative vectorized version of our algorithm would be :

    v_minimum = vector_of_16_bytes[0]
    v_maximum = vector_of_16_bytes[0]
    v_sum = vector_of_16_zeros
    v_sum_square = vector_of_16_zeros

    foreach vector_of_16_bytes v:
        v_minimum = vector_minimum(v_minimum, v)
        v_maximum = vector_maximum(v_maximum, v)
        v_sum = vector_add(v_sum, v)
        v_sum_square = vector_sum(v_sum_square, vector_mul(v, v))

    minimum = minimum_of_16_values(v_minimum)
    maximum = maximum_of_16_values(v_maximum)
    sum = sum_of_X??_values(v_sum)
    sum_square = sum_of_X??_values(v_sum_square)
    (deal with potential remaining pixels if number of pixels is not multiple of 16)


vector_minimum and vector_maximum do exist, as _mm_min_epu8 and _mm_max_epu8. But for vector_add, which variant to use: _mm_add_epi8, _mm_add_epi16, _mm_add_epi32 or _mm_add_epi64 ? Well, none directly. We want to add uint8 values, but the result cannot fit on a uint8 (255+255=510). The same holds for sum_square. The result of each square multiplication requires at least a uint16, and we want to loop several times, so we need at least a uint32-wide accumulator. We designed the overall algorithm to be able to handle a uint64 accumulator, but using that in the tight loop would decrease the performance of the vectorization. So we will decompose our loop into one outer loop and one inner loop. The inner loop will do as many iterations as possible while still not overflowing a uint32 accumulator, so (2^32-1)/(255*255) = 66051.xxxx iterations, which we round down to the closest multiple of 16.

So what about v_sum = vector_add(v_sum, v) ?
The first idea would be to extract the 4 lowest order bytes of v, unpack them so that they fit each on a uint32 and then use _mm_add_epi32 to add them in the v_sum accumulator.

    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(v, zero), zero))
_mm_unpacklo_epi8(v, zero) expands the 8 lowest order bytes of v as 8 uint16. And similarly _mm_unpacklo_epi16(v, zero)  expands the 4 lowest order uint16 of v as 4 uint32.

And then repeat that with the 3 other groups of 4 bytes :

    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(_mm_shuffle_epi32(v, 1), zero), zero))
    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(_mm_shuffle_epi32(v, 2), zero), zero))
    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(_mm_shuffle_epi32(v, 3), zero), zero))


But we can do better thanks to the _mm_sad_epu8 intrinsic. It is designed to "compute the absolute differences of packed unsigned 8-bit integers in a and b, then horizontally sum each consecutive 8 differences to produce two unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low 16 bits of 64-bit elements in dst." If we notice that ABS(x-0) = x when x >= 0, then it does what we want.

    v_sum = _mm_add_epi64(v_sum, _mm_sad_epu8(v, zero))

Pedantic note: we can actually use _mm_add_epi32, since there is no risk of overflow: 8 * 66051 * 255 fits on a uint32. The advantage of using _mm_add_epi32 is that, as we will use it elsewhere, the compiler can re-order additions to group them in pairs and benefit from their 0.5 cycle throughput.

_mm_sad_epu8() has a relatively high latency (5 cycles), but it is still a big win since it replaces 14 intrinsics of our initial version.
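Putting the sum part together, a standalone sketch (for a buffer whose size is a multiple of 16; the real code also maintains min/max and sum_square in the same loop) looks like this:

#include <emmintrin.h>
#include <cstdint>
#include <cstddef>

/* Sum a buffer of bytes, 16 at a time, using the _mm_sad_epu8 trick */
uint64_t sum_of_bytes_sse2(const uint8_t* data, size_t count /* multiple of 16 */)
{
    const __m128i zero = _mm_setzero_si128();
    __m128i v_sum = zero;
    for (size_t i = 0; i < count; i += 16)
    {
        const __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(data + i));
        /* two partial sums of 8 bytes each, stored in the two 64 bit halves */
        v_sum = _mm_add_epi64(v_sum, _mm_sad_epu8(v, zero));
    }
    /* horizontal sum of the two 64 bit halves */
    return static_cast<uint64_t>(_mm_cvtsi128_si64(v_sum)) +
           static_cast<uint64_t>(_mm_cvtsi128_si64(_mm_srli_si128(v_sum, 8)));
}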

What about the computation of the square value ? There is no intrinsic to directly multiply packed bytes and get the resulting packed uint16 (or even better uint32, since that is the type we want to operate on eventually to be able to do several iterations of our loop!). One approach would be to take the 8 lowest order bytes, un-pack them to uint16, and use the _mm_mullo_epi16() intrinsic that does uint16 x uint16 -> uint16. Then you would take the 4 lowest order uint16 of this intermediate result, un-pack them to uint32 and finally use _mm_add_epi32 to accumulate them in v_sum_square.

    v_low = _mm_unpacklo_epi8(v, zero)
    v_low_square = _mm_mullo_epi16(v_low, v_low)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_unpacklo_epi16(v_low_square, zero))


Then repeat the operation with the 4 upper order uint16 of the intermediate result.

    v_sum_square = _mm_add_epi32(v_sum_square,
        _mm_unpacklo_epi16(_mm_shuffle_epi32(v_low_square, 2 | (3 <<2)), zero) )


_mm_shuffle_epi32(v, 2 | (3 << 2)) is a trick to replicate the high 64 bits of a XMM register into its low 64 bits. We don't care about the values of the resulting high 64 bits since they will be lost with the later unpack operations.

And then repeat the whole process with the 8 highest order bytes.

    v_high = _mm_unpacklo_epi8(_mm_shuffle_epi32(v, 2 | (3 <<2)), zero)
    v_high_square = _mm_mullo_epi16(v_high, v_high)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_unpacklo_epi16(v_high_square, zero))
    v_sum_square = _mm_add_epi32(v_sum_square,
        _mm_unpacklo_epi16(_mm_shuffle_epi32(v_high_square, 2 | (3 <<2)), zero) )


We can actually do much better with the _mm_madd_epi16() intrinsic, which will "Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally add adjacent pairs of intermediate 32-bit integers, and pack the results". This is really close to what we need. We just need to prepare uint16/int16 integers (the sign convention here does not matter since a uint8 zero-extended to 16 bit is both a valid uint16 and int16).

    v_low_16bit = _mm_unpacklo_epi8(v, zero)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_low_16bit, v_low_16bit))
    v_high_16bit = _mm_unpacklo_epi8(_mm_shuffle_epi32(v, 2 | (3 <<2)), zero)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_high_16bit, v_high_16bit))


The latencies and throughput of _mm_mullo_epi16 and _mm_madd_epi16 are the same, so the second version is clearly a big win.
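The sum-of-squares part of the inner loop then boils down to the following sketch (_mm_unpackhi_epi8 is just a shorter equivalent of the shuffle + unpacklo combination used above):

#include <emmintrin.h>

/* Accumulate the squares of 16 bytes into the 4 uint32 lanes of v_sum_square.
   The caller must bound the number of iterations so the lanes cannot overflow. */
static inline __m128i add_squares_of_16_bytes(__m128i v_sum_square, __m128i v)
{
    const __m128i zero = _mm_setzero_si128();
    const __m128i v_low_16bit  = _mm_unpacklo_epi8(v, zero);
    const __m128i v_high_16bit = _mm_unpackhi_epi8(v, zero);
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_low_16bit, v_low_16bit));
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_high_16bit, v_high_16bit));
    return v_sum_square;
}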

Use of AVX2

We can tweak performance a bit further by doing a 2x loop unrolling, which will enable the compiler to re-order some operations so that those with a throughput of 0.5 cycle (such as _mm_add_epi32, _mm_unpacklo_epi8) are consecutive, and thus 2 of them can be executed in a single cycle. When doing so, we notice that we are operating on a virtual 256 bit register. But 256 bit registers do exist in the AVX2 instruction set, introduced in relatively recent hardware (2013 for Intel Haswell). AVX/AVX2 offers the YMM registers, equivalent to XMM registers but with a doubled bit width (the 128 bit low part of a YMM register is its corresponding XMM register). One particularity of the YMM registers is that they operate on two quite distinct "128 bit lanes", but you can still extract each lane.

The port to AVX2 is quite straightforward :

    v = _mm256_load_si256(data + i)
    v_sum = _mm256_add_epi32(v_sum, _mm256_sad_epu8(v, zero))
    v_low_16bit = _mm256_cvtepu8_epi16(_mm256_extracti128_si256(v, 0));
    v_sum_square = _mm256_add_epi32(v_sum_square, _mm256_madd_epi16(v_low_16bit, v_low_16bit))
    v_high_16bit = _mm256_cvtepu8_epi16(_mm256_extracti128_si256(v, 1));
    v_sum_square = _mm256_add_epi32(v_sum_square, _mm256_madd_epi16(v_high_16bit, v_high_16bit))


_mm256_extracti128_si256(v,0) extracts the 128 bit lower part of the register,
and _mm256_extracti128_si256(v,1) the 128 bit upper part.

The good news is that we can have a single code base for the SSE2 and AVX2 variants, by writing the code AVX2-style. In the SSE2 case, we in fact define the _mm256 functions with their corresponding _mm 128 bit functions operating on the low and high 128 bit parts. For example:

static inline GDALm256i GDALmm256_add_epi32(GDALm256i r1, GDALm256i r2)
{
    GDALm256i reg;
    reg.low = _mm_add_epi32(r1.low, r2.low);
    reg.high = _mm_add_epi32(r1.high, r2.high);
    return reg;
}
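For this to compile in the SSE2 case, GDALm256i is presumably nothing more than a pair of XMM registers, along these lines (simplified; the real header is linked below):

#include <emmintrin.h>

/* Simplified guess at the emulation type: two 128 bit halves standing in
   for one 256 bit YMM register */
typedef struct
{
    __m128i low;
    __m128i high;
} GDALm256i;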

The AVX2-with-SSE2 emulation can be found in :
https://github.com/OSGeo/gdal/blob/trunk/gdal/gcore/gdal_avx2_emulation.hpp

Thanks to inlining and usual compiler optimizations, this will be equivalent to our hand-made 2x unrolled version ! The final code is here.

Regarding timings, our SSE2-emulated AVX2 version runs in 1.6s, so roughly a 4x improvement with respect to the portable optimized C version. On hardware capable of AVX2, the pure AVX2 version is 15% faster than the SSE2-emulated version. So definitely not enough to justify a dedicated code path, but here, as we have a unified one, it comes almost for free. Provided that the code is explicitly compiled to enable AVX2.

Nodata values

Up to now, we have ignored the potential existence of nodata values. When computing statistics, we do not want pixels that match the nodata value to be taken into account in the minimum, maximum, mean or standard deviation.

In the pure C approach, this is easy. Just ignore pixels that match the nodata value:

    minimum = maximum = value[0]
    foreach value:
        if value != nodata:
            valid_pixels = valid_pixels + 1
            minimum = min(minimum, value)
            maximum = max(maximum, value)
            sum = sum + value
            sum_square = sum_square + value * value


We cannot directly translate that with SSE2/AVX2 mnemonics since the result of the value != nodata test can be different for each of the 32 packed bytes of the (virtual) AVX2 register, and making tests for each components of the vector register would kill performance to a point where it would be worse than the pure C approach !

We can however rewrite the above in a vector friendly way with :

    minimum = maximum = first value that is not nodata
    neutral_value = minimum (or any value in final [min,max] range that is not nodata)
    foreach value:
        validity_flag = if (value != nodata) 0xFF else 0
        value_potentially_set_to_zero = value & validity_flag
        value_potentially_set_to_neutral = (value & validity_flag) | (neutral_value & ~validity_flag)
        valid_pixels = valid_pixels + validity_flag / 255
        minimum = min(minimum, value_potentially_set_to_neutral)
        maximum = max(maximum, value_potentially_set_to_neutral)
        sum = sum + value_potentially_set_to_zero
        sum_square = sum_square + value_potentially_set_to_zero * value_potentially_set_to_zero


(value & validity_flag) | (neutral_value & ~validity_flag) is a quite common pattern in SIMD code to implement a if/then/else pattern without branches (for classic scalar code, if/then/else branches are more efficient due to the CPU being able to do branch prediction)

The only complication is that there is no SSE2 intrinsics for non-equality testing, so we have to transform that a bit to use equality testing only. And we will also remove the need for division in the loop :

    foreach value:
        invalidity_flag = if (value == nodata) 0xFF else 0
        value_potentially_set_to_zero = value & ~invalidity_flag
        value_potentially_set_to_neutral = (value & ~invalidity_flag) | (neutral_value & invalidity_flag)
        invalid_pixels_mul_255 = invalid_pixels_mul_255 + invalidity_flag
        minimum = min(minimum, value_potentially_set_to_neutral)
        maximum = max(maximum, value_potentially_set_to_neutral)
        sum = sum + value_potentially_set_to_zero
        sum_square = sum_square + value_potentially_set_to_zero * value_potentially_set_to_zero

    valid_pixels = total_pixels - invalid_pixels_mul_255 / 255


The computation of invalid_pixels_mul_255 in a vectorized way is the same as
v_sum, using the _mm_sad_epu8() trick. The resulting SSE2 code is :

    foreach vector_of_16_bytes v:
        v_invalidity_flag = _mm_cmpeq_epi8(v, v_nodata)
        v_value_potentially_set_to_zero = _mm_andnot_si128(v_invalidity_flag, v)
        v_value_potentially_set_to_neutral = _mm_or_si128(
            v_value_potentially_set_to_zero, _mm_and_si128(v_invalidity_flag, v_neutral))
        v_invalid_pixels_mul_255 = _mm_add_epi32(v_invalid_pixels_mul_255,
                                        _mm_sad_epu8(v_invalidity_flag, zero))
        [ code for min, max operating on v_value_potentially_set_to_neutral ]
        [ code for sum and sum_square operating on v_value_potentially_set_to_zero ]


The transposition to AVX2 is straightforward.

We can notice that this version that takes into account nodata value can only be used once we have hit a pixel that is not the nodata value, to be able to initialize the neutral value.

What about uint16 rasters ?


The same general principles apply. If we still want to limit ourselves to operating with at most uint64 accumulators, given that the maximum square value of a uint16 is 65535*65535, this limits us to rasters of 2^64 / (65535*65535) ~= 4 billion pixels, which remains acceptable for common use cases.

One oddity of the SSE2 instruction set is that it includes only _mm_min_epi16() / _mm_max_epi16() intrinsics, that is to say ones that operate on signed int16. The _mm_min_epu16() that operates on uint16 was only introduced in the later SSE4.1 instruction set (which is quite commonly found in not-so-recent CPUs).

There are tricks to emulate _mm_min_epu16() in pure SSE2 using saturated subtraction and masking :

    // if x <= y, then mask bits will be set to 1.
    mask = _mm_cmpeq_epi16( _mm_subs_epu16(x, y), zero )

    // select bits from x when mask is 1, y otherwise
    min(x,y) = _mm_or_si128(_mm_and_si128(mask, x), _mm_andnot_si128(mask, y));


Another way is to shift the unsigned values by -32768, so as to operate on signed 16bit values.
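A sketch of that second trick, for the minimum (the same XOR is applied on the inputs and on the result so that everything stays in the unsigned domain):

#include <emmintrin.h>

/* Emulate _mm_min_epu16() with SSE2 only: XOR-ing with 0x8000 maps unsigned
   ordering onto signed int16 ordering, on which _mm_min_epi16() operates */
static inline __m128i min_epu16_sse2(__m128i x, __m128i y)
{
    const __m128i shift = _mm_set1_epi16(static_cast<short>(0x8000));
    const __m128i m = _mm_min_epi16(_mm_xor_si128(x, shift), _mm_xor_si128(y, shift));
    return _mm_xor_si128(m, shift);
}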

This -32768 shift trick is also necessary because, as in the byte case, we want to still be able to use the _mm_madd_epi16 intrinsic, which operates on signed int16, to compute the sum of square values. One subtlety to observe is that when you operate on 2 consecutive pixels equal to 0, _mm_madd_epi16 does :

 (0 - 32768) * (0 - 32768) + (0 - 32768) * (0 - 32768)
= 1073741824 + 1073741824
= 2147483648 = 2^31


Which actually overflows the range of signed int32 ( [-2^31, 2^31-1] ) ! The good news is that _mm_madd_epi16 does not saturate the result, so it will actually return 0x80000000. This should normally be interpreted as -2^31 in signed int32 convention, but as we know that the result of _mm_madd_epi16(x,x) is necessarily a positive value, we can still correctly interpret the result as a uint32 value. This is where you feel lucky that Intel chose the two's complement convention for signed integers.

To compute the sum of values, there is no nice trick equivalent to _mm_sad_epu8, so we just do it the boring way: unpack separately the 64 bit low and high parts of the value register from uint16 to uint32 and accumulate them with _mm_add_epi32.

Exactly as for the byte case, the uint16 case can be easily transposed to AVX2 or
emulated-AVX2.

Conclusion


Conversions between integer and floating point values can be costly, so avoiding them as much as possible is a big win (provided that you make sure not to overflow your integer accumulators).

In theory, the gains offered by a SSE2/AVX2 optimized version are at most limited to a factor of width_of_vector_register / width_of_elementary_type, so, for bytes and SSE2, to 16. But often the gain is smaller, so do that only when you have already come to an optimized portable C version (or if the SIMD instruction set includes a dedicated intrinsic that just does what you want).

lundi 2 mai 2016

GDAL/OGR 2.1.0 released

On behalf of the GDAL/OGR development team and community, I am pleased to announce the release of GDAL/OGR 2.1.0.  GDAL/OGR is a C++ geospatial data access library for raster and vector file formats, databases and web services.  It includes bindings for several languages, and a variety of command line tools.

The 2.1.0 release is a major new feature release with the following highlights:
  • New GDAL/raster drivers:
    • CALS: read/write driver for CALS Type I rasters
    • DB2: read/write support for DB2 database (Windows only)
    • ISCE: read/write driver
    • MRF: read/write driver for Meta Raster Format
    • SAFE: read driver for ESA SENTINEL-1 SAR products
    • SENTINEL2: read driver for ESA SENTINEL-2 L1B/L1C/L2A products
    • WMTS: read driver for OGC WMTS services
  • New OGR/vector drivers:
    • AmigoCloud: read/write support for AmigoCloud mapping platform
    • DB2: read/write support for DB2 database (Windows only)
    • MongoDB: read/write driver
    • netCDF: read/write driver
    • VDV: read/write VDV-451/VDV-452 driver, with specialization for the Austrian official open government street graph format
  • Significantly improved drivers:
    • CSV: new options, editing capabilities of existing file
    • ElasticSearch: read support, and support for writing any geometry type
    • GeoJSON: editing capabilities of existing file, "native data" (RFC 60) support
    • MBTiles: add raster write support, fixes in open support
    • PDF: add PDFium library as a possible back-end.
    • PLScenes: add support for V1 API
    • VRT: on-the-fly pan-sharpening
    • GTiff: multi-threaded compression for some compression methods
  • Port library: add /vsis3/, /vsis3_streaming/, /vsicrypt/ virtual file systems
  • Upgrade to EPSG database v8.8 
  • General sanitization pass to clean up code, fix a lot of compiler warnings, as well as issues pointed out by static code analyzers.
  • Fixes in a number of drivers to be more robust against corrupted files.
You can also find more complete information on the new features and fixes in the 2.1.0 release notes.

The release can be downloaded from:
  * http://download.osgeo.org/gdal/2.1.0/gdal210.zip - source as a zip
  * http://download.osgeo.org/gdal/2.1.0/gdal-2.1.0.tar.gz - source as .tar.gz
  * http://download.osgeo.org/gdal/2.1.0/gdal-2.1.0.tar.xz - source as .tar.xz
  * http://download.osgeo.org/gdal/2.1.0/gdal-grass-2.1.0.tar.gz - source of GDAL GRASS plugin
  * http://download.osgeo.org/gdal/2.1.0/gdalautotest-2.1.0.tar.gz - test suite
  * http://download.osgeo.org/gdal/2.1.0/gdal210doc.zip - documentation/website



As there have been a few changes that affect the behaviour of the library, developers are strongly advised to read the migration guide.

dimanche 6 mars 2016

Paris OSGeo Code Sprint 2016 debrief

While my memories are still fresh, here is a report on this week of code sprinting. First, a big thanks to Olivier Courtin for organizing this event, to all the sponsors that put up the money to make it happen, and to the Mozilla Foundation for hosting us in the most scenic coding venue I've ever seen.



As expected, I mostly concentrated on GDAL work. My main tasks were related to polishing and extending the work initiated by Ari Jolma for the support of the "M dimension" of geometries, M standing for Measurement, a numeric property attached to each point/vertex that can encode different attributes: time, length, or any other interesting property beyond x, y and z....
Those good old shapefiles are still a bit fancy since they do not really distinguish between XYZ and XYZM geometries up-front. In fact, as soon as you have a Z component, the Shapefile specification requires an M value to be encoded, even if not used. There is consequently a nodata value (any value lower than -10^38) for such cases. As M geometries are a bit esoteric, we want to avoid reporting them when they are not actually used. Consequently, a heuristic has been added to the shapefile driver that, by default, probes the first shape of the file and checks whether it has meaningful M values. If not, the layer geometry type is just declared as XYZ. This should help with backward compatibility of software using GDAL. Implemented per r33538 and r33539 (a small sketch of how this surfaces through the OGR API follows these driver notes).
The support of M in the CSV driver was more straightforward (r33544) due to the bulk of the work being of course done in the WKT importer/exporter.
Regarding the GeoPackage driver, the main need was to be able to correctly parse geometry headers with the XYM or XYZM bounding boxes that may be found. The main difficulty was testing that, since OGR itself only generates XY or XYZ bounding boxes, so editing hexadecimal WKB by hand was needed. Somewhat amusing with a broken laptop screen. Anyway, this was done through r33551.
Support for M geometries in SQLite/Spatialite required a number of small changes scattered through the driver code base, and new tests for the various variants (regular geometries vs compressed ones). The upgrade of this driver also makes it possible to use XYM/XYZM geometries with the SQLite SQL dialect usable by all other drivers. Implemented per r33554.
The upgrade of the FileGDB and OpenFileGDB drivers gave me some headaches, as it turned out that support for writing M values was broken in the older FileGDB SDK 1.3. After upgrading to v1.4, things went much more smoothly. Support for M with FileGDB v9.X geodatabases was also handled. Implemented per r33563. For the nostalgics, the PGeo driver should also benefit from those changes, although this wasn't tested.
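
For illustration, here is a minimal sketch (not part of the sprint work itself; the file name is hypothetical) of how the shapefile heuristic surfaces through the OGR C API: the reported layer geometry type will include the M dimension only if meaningful M values were found.

    /* Open a shapefile and print the geometry type the driver reports. */
    #include <stdio.h>
    #include "gdal.h"
    #include "ogr_api.h"

    int main(void)
    {
        GDALAllRegister();
        /* "points_z.shp" is just a placeholder name. */
        GDALDatasetH hDS = GDALOpenEx("points_z.shp", GDAL_OF_VECTOR,
                                      NULL, NULL, NULL);
        if (hDS == NULL)
            return 1;
        OGRLayerH hLayer = GDALDatasetGetLayer(hDS, 0);
        /* The name includes the M dimension only when the probing heuristic
           found meaningful M values in the first shape. */
        printf("%s\n", OGRGeometryTypeToName(OGR_L_GetGeomType(hLayer)));
        GDALClose(hDS);
        return 0;
    }
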

On the MapServer front, in the middle of many other things, Thomas Bonfort merged, in time for MapServer 7.0.1, an older pull request of mine that I had forgotten about, adding support for the 64-bit integer fields that may now come with GDAL 2.0. I also backported a fix to handle WMS TIME on contour layers, in time for MapServer 6.4.3.

Aside from my own coding, I enjoyed spending time with other developers to help them with their GDAL tasks. With Rob Emmanuele, we tried to figure out how to make the "driver" that handles files accessed through HTTP/HTTPS better report errors, especially on Amazon S3 storage, so that upper library or application layers can deal with them better. In particular, you want to be able to distinguish a non-existing resource (a typo in the URL, for example) from a valid one for which you have not specified the right credentials. This turned out to be much more difficult than I would myself have anticipated, since there are a lot of situations where we want errors when accessing files to be silent (for example when drivers probe for potential "sidecar" files that accompany main files: think of the .prj, .wld, .aux files), and there's no way in the current design to know when to be verbose or not. Rob finally came up with a design for a file system error reporting mechanism that is not verbose by default, but that may be queried by the code paths that want to report errors in a verbose way. This is still work in progress, but hopefully Rob should be able to polish it so that it can be included in the upcoming GDAL 2.1 release (feature freeze at the end of this month).

With Yann Chemin, we had quite a lot of fun exploring how to better support the catalog of spatial reference systems published by the IAU (International Astronomical Union), which describes the SRS used for other planets and satellites. In particular, we discovered that some of those SRS use the Oblique Cylindrical Equal Area (OCEA) projection. This projection is supported by proj.4 (thanks to Howard Butler for designing a modern website for this not always sexy but so fundamental piece of software that proj.4 is), but not by the OGR Spatial Reference (OSR) component of GDAL itself. The main challenge to make it available through OSR is to map the proj.4 parameters of the projection to parameter names in WKT. Documentation to do that is generally scarce, and we ended up opening the bible of projection experts, that is to say "Map Projections - A Working Manual", by John P. Snyder, USGS Professional Paper 1395, of which proj.4 is mostly a translation into C code. The book sheds some light, at page 80, on the OCEA projection. The interesting part of OCEA is that it comes with 2 variants... The gist of the support is now in this pull request, with some more work and research needed to clarify the remaining mysteries. In the meantime, GRASS can now benefit from IAU codes (r67950 and r67951).

Always wondering about the possible command line switches of GDAL/OGR utilities? Guillaume Pasero contributed a bash completion script to improve your user experience.

$ ogr2ogr - (TAB character pressed)
-append --debug -dsco --format --help-general --locale --optfile -preserve_fid -skipfailures -sql -update
-a_srs -dialect -f --formats -lco -nln -overwrite -progress -spat -s_srs --version
--config -dim -fid -geomfield --license -nlt --pause -select -spat_srs -t_srs -where


Regina Obe also worked on improving the ODBC support in OGR: build support of the Windows ODBC libraries with the mingw64 compiler, and the ability to support a large number of columns in tables.