| Commit message | Author | Age | Files |
|
|
|
|
|
|
|
|
|
| |
When the output driver supports altering GeomFieldDefn's nullable flag
after the fact. PostgreSQL supports that. GPKG doesn't, but one can
use GEOMETRY_NULLABLE=NO in layer creation options instead.
See https://gdal.org/en/stable/development/rfc/rfc53_ogr_notnull_default.html
This commit partially reverts 2c3ee79cb434fc4cf315ee3a6a526156053d76c4.
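For illustration, a minimal sketch using the GDAL Python bindings; the
path, layer name and SRS are placeholders:
  from osgeo import ogr, osr

  # GPKG cannot alter the NOT NULL constraint on the geometry column
  # after the fact, so pass it as a layer creation option instead.
  srs = osr.SpatialReference()
  srs.ImportFromEPSG(3006)
  ds = ogr.GetDriverByName('GPKG').CreateDataSource('/tmp/example.gpkg')
  lyr = ds.CreateLayer('example', srs=srs, geom_type=ogr.wkbMultiPolygon,
                       options=['GEOMETRY_NULLABLE=NO'])
  print(lyr.GetLayerDefn().GetGeomFieldDefn(0).IsNullable())  # expected: 0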
|
| |
|
|
|
|
|
|
| |
Rather than CreateLayer(). Unfortunately not very helpful since
.SetNullable() appears to be a no-op; it doesn't set a NOT NULL
constraint on the geometry column.
|
| |
|
| |
|
| |
|
|
|
|
| |
To avoid JSON-encoding on the Python side.
|
|
|
|
|
| |
Recent GDAL versions can set comments on columns, which is now reflected
in the schema.
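A minimal sketch, assuming GDAL >= 3.7 where OGRFieldDefn gained
Get/SetComment(); the field name and comment text are placeholders:
  from osgeo import ogr

  # Drivers that support column comments (e.g. PostgreSQL) persist this
  # as a COMMENT ON COLUMN when the field is created.
  defn = ogr.FieldDefn('diarienr', ogr.OFTString)
  defn.SetComment('Case number assigned by the authority')
  print(defn.GetComment())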
|
|
|
|
|
| |
Older GDAL versions need the destination path to be a string, not a
path-like object.
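A minimal sketch of the coercion; the destination path is a placeholder:
  from pathlib import Path
  from osgeo import ogr

  # Older bindings reject os.PathLike arguments, so coerce to str first.
  dst = Path('/tmp/example.gpkg')
  ds = ogr.GetDriverByName('GPKG').CreateDataSource(str(dst))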
|
| |
|
|
|
|
| |
For consistency with raster handling.
|
|
|
|
|
|
|
|
|
|
| |
Raster data is not stored in the PostGIS database. Instead, the mtime
of the target directory is used to determine whether the COG is up to
date.
Add new flags: --metadata-compress for JSON metadata compression (which
also applies to MVT metadata), and --rasterdir for the target raster
directory.
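A hypothetical helper sketching the mtime comparison; the names are
illustrative, not the actual implementation:
  from pathlib import Path

  def cog_up_to_date(source: Path, rasterdir: Path) -> bool:
      """Return True when the target raster directory is at least as new
      as the source file, i.e. the COG does not need to be rebuilt."""
      try:
          return rasterdir.stat().st_mtime >= source.stat().st_mtime
      except FileNotFoundError:
          return False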
|
|
|
|
| |
Also rename the `administrativindelning` layer group to `adm`.
|
|
|
|
| |
been updated upstream.
|
|
|
|
|
| |
This reverts commit 86f1e635a647e94c33e10e4d230853fd5d84c0bc. It
appears that .zip files have a consistent schema now…
|
| |
|
|
|
|
|
|
|
| |
“Age” is misleading as it suggests a value that increases over time as
time stretches away from a fixed epoch. A timestamp (ts) is a more
appropriate term for that field (it's rather a “daystamp”, but that's a
matter of precision).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This allows client-side filtering.
The "age" attribute is a signed short (int16) expressing the number of
days since 1970-01-01. This covers the range 1880-04-15 to 2059-09-18
which should be more than enough. The source value is a Date or
Datetime and depends on the source layer.
- For vbk:*, it's the date at which the project was last saved in
Vindbrukskollen.
- For mrr:*, it's the date at which the application was received by
Bergsstaten for applications, and the decision date for granted permits.
- For avverk:*, it's the date at which the application was received by
Skogsstyrelsen for applications, and the clearcut date (according to
Skogsstyrelsen) for completed objects.
For other layers, we don't export that attribute.
This makes the tiles a bit larger. Before (without the extra
attribute):
vbk: 1261× tiles, 599kiB uncompressed (avg=487B/t), 389kiB brotli (35%, avg=316B/t)
avverk: 3734× tiles 115MiB uncompressed (avg=32kiB/t), 72MiB brotli (37%, avg=20kiB/t)
mrr: 1324× 331kiB uncompressed (avg=257B/t), 289kiB brotli (13%, avg=223B/t)
→ total 121631367B uncompressed (avg=19kiB/t), 76692807B brotli (37%, avg=12kiB/t)
After (with the extra attribute):
vbk: 1261× tiles, 714kiB uncompressed (avg=580B/t), 425kiB brotli (40%, avg=345B/t)
avverk: 3734× tiles 127MiB uncompressed (avg=35kiB/t), 78MiB brotli (39%, avg=21kiB/t)
mrr: 1324× 323kiB uncompressed (avg=322B/t), 342kiB brotli (18%, avg=265B/t)
→ total 134274796B uncompressed (avg=21kiB/t), 82264731B brotli (39%, avg=13kiB/t)
Summary:
+12.1MiB uncompressed (+10.4%, avg=+1.95kiB/t)
+5.3MiB compressed (+7.3%, avg=+882B/t)
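A minimal sketch of the day-count encoding; the helper is illustrative,
not the actual implementation:
  from datetime import date

  EPOCH = date(1970, 1, 1)

  def encode_age(d: date) -> int:
      """Encode a date as a signed 16-bit count of days since 1970-01-01."""
      days = (d - EPOCH).days
      if not -32768 <= days <= 32767:
          raise ValueError(f'{d} is outside the int16 day range')
      return days

  assert encode_age(date(2059, 9, 18)) == 32767  # upper end of the range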
|
|
|
|
|
| |
This avoids crashing when `webmap-import` is run as another user on a
homedir with restrictive permissions, for instance.
|
|
|
|
| |
a stable attribute.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
“lst:riksintresse_*”.
|
|
|
|
|
| |
And “sks:AvverkAnm” and “sks:UtfordAvverk” to “sks:avverk_anmald” and
“sks:avverk_utford”, respectively.
|
|
|
|
|
| |
It's clearer that way; after all, Naturvårdsavtal and Återvätningsavtal
are not part of Naturvårdsverkets Naturvårdsregistret.
|
|
|
|
|
| |
Including k-skog, but we leave that one alone for now as the script
lacks some logic to process raster files.
|
| |
|
| |
|
|
|
|
| |
Inspired by https://vbk.lansstyrelsen.se/ .
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
LST.vbk_havsbaserad_vindkraft.
It appears these source files are exported daily at 01:53, then again at
07:41 but with different content and schema… We're now targeting the
latter.
This reverts commit 59a0ebbb824eaa75dcf7b0e39c5b627fb5ace294 and
partially reverts commit 2d8f839914feb2b9d31f2682b1c9992dd21c9b16.
|
|
|
|
|
|
|
| |
All features have the same export date value, so the date is arguably a
schema attribute, not a feature attribute. Since exports from the
mineral register happen daily nowadays, the date should never be more
than a day off compared with the zip file's mtime.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The file appears to be official and updated as often as the GPKG
nowadays. Both have 101 distinct features.
And unlike the GPKG, the multipolygons are not split, so we add the
UNIQUE CONSTRAINT on "OmrID".
For some reason there are a lot of duplicates in the GPKG.
sqlite> SELECT "ObjectID", COUNT(*) FROM "lst_vbk_havsbaserad_vindkraft" GROUP BY 1 ORDER BY 2;
101|9
1|24
2|24
[…]
100|24
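A hypothetical sketch of the constraint using psycopg; the connection
string and table name are placeholders:
  import psycopg

  # One row per area in this file, so "OmrID" can be made unique.
  with psycopg.connect('service=webmap') as conn:
      conn.execute('''
          ALTER TABLE postgis."lst:vbk_havsbaserad_vindkraft"
            ADD CONSTRAINT "vbk_havsbaserad_vindkraft_OmrID_key"
            UNIQUE ("OmrID")
      ''')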
|
|
|
|
| |
So the information can be exposed to the webmap's info dialog.
|
|
|
|
| |
We don't want to bump ctimes since they are used in the layer cache.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The Web Application doesn't need the original full/non-simplified
geometry. After all, OpenLayers has fetched the tiles already and the
(visible part of the) geometry is already cached in the target SRS with
sensible simplification factors.
So there is really no need to transfer megabytes of high-precision data
to the client to highlight the feature. This change means that CGI
responses will remain small and hence can be buffered.
To speed things up further one could store geometry area/perimeter/length
as generated columns (along with the list of municipality and county
codes) and serve rows directly from PostgreSQL (bypassing PostGIS). Also,
since buffering is now an option, the rows don't need to be served in the
same order as the request, so one could batch queries using ANY[] or IN.
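A hypothetical sketch of the generated columns suggested above, using
psycopg; the connection string is a placeholder and this is not the
repository's actual schema:
  import psycopg

  # Store area and perimeter as generated columns so they can be served
  # straight from the table at request time.
  DDL = '''
  ALTER TABLE postgis."nvr:SCI_Rikstackande"
    ADD COLUMN geom_area double precision
      GENERATED ALWAYS AS (ST_Area(wkb_geometry)) STORED,
    ADD COLUMN geom_perimeter double precision
      GENERATED ALWAYS AS (ST_Perimeter(wkb_geometry)) STORED
  '''

  with psycopg.connect('service=webmap') as conn:
      conn.execute(DDL)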
|
| |
|
|
|
|
|
| |
We subdivide administrative polygons to speed things up, cf.
https://symphony.is/about-us/blog/boosting-postgis-performance
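A hypothetical sketch of the subdivision step; the table and column
names are illustrative:
  import psycopg

  # Subdividing large administrative polygons keeps individual geometries
  # small, which makes index-backed intersection tests much cheaper.
  with psycopg.connect('service=webmap') as conn:
      conn.execute('''
          CREATE TABLE postgis.adm_subdivided AS
            SELECT kod, ST_Subdivide(wkb_geometry, 255) AS wkb_geometry
            FROM postgis.adm
      ''')
      conn.execute('''
          CREATE INDEX ON postgis.adm_subdivided USING gist (wkb_geometry)
      ''')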
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This causes overhead on some geometries though. Querying 100000 random
features in each table and measuring yields the following.
Without geom_area/geom_perimeter/geom_length
============================================
nvr:SCI_Rikstackande (forced to 2D)
Time: min=0.000s, max=0.668s, avg=0.003s
ren:riks_ren
Time: min=0.000s, max=0.090s, avg=0.012s
sametinget:flyttled
Time: min=0.000s, max=0.003s, avg=0.000s
sks:UtfordAvverk
Time: min=0.000s, max=0.180s, avg=0.001s
With geom_area/geom_perimeter/geom_length
=========================================
nvr:SCI_Rikstackande (forced to 2D)
Time: min=0.000s, max=1.242s, avg=0.004s
ren:riks_ren
Time: min=0.000s, max=0.092s, avg=0.012s
sametinget:flyttled
Time: min=0.000s, max=0.016s, avg=0.000s
sks:UtfordAvverk
Time: min=0.000s, max=0.190s, avg=0.001s
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
And skip geometry columns other than the first one. Some sources
(svk:stolpar, nvr:SPA_Rikstackande, nvr:OSPAR, svk:stationsomraden,
nvr:HELCOM, nvr:SCI_Rikstackande) come as 3D and we don't flatten them
during import.
Dropping the Z (and/or M) component during GeoJSON export avoids
transferring data which will be ignored by OpenLayers anyway. The
overhead of ST_Force2D() seems negligible in comparison with
ST_AsGeoJSON(). Querying 100000 random features in each table and
measuring yields the following.
Simple query
============
SELECT convert_to(ST_AsGeoJSON(m.*,geom_column=>'wkb_geometry',pretty_bool=>'f'),'UTF8') AS "GeoJSON"
FROM postgis."nvr:SCI_Rikstackande" m WHERE m."ogc_fid" = %s
nvr:HELCOM
Time: min=0.000s, max=0.002s, avg=0.000s
Size: min=1.24kiB, max=24.23kiB, avg=9.47kiB
nvr:SCI_Rikstackande
Time: min=0.000s, max=0.663s, avg=0.002s
Size: min=0.84kiB, max=61613.04kiB, avg=259.67kiB
(Yes that's >50MiB, cf. SE0820430 Torne och Kalix älvsystem.)
nvr:SPA_Rikstackande
Time: min=0.000s, max=0.008s, avg=0.001s
Size: min=1.36kiB, max=392.37kiB, avg=35.23kiB
svk:stolpar
Time: min=0.000s, max=0.003s, avg=0.000s
Size: min=0.19kiB, max=0.20kiB, avg=0.19kiB
svk:stationsomraden
Time: min=0.000s, max=0.003s, avg=0.000s
Size: min=0.35kiB, max=1.22kiB, avg=0.56kiB
sks:UtfordAvverk
Time: min=0.000s, max=0.023s, avg=0.001s
Size: min=0.51kiB, max=1117.39kiB, avg=9.75kiB
Advanced query
==============
WITH feature AS (
SELECT m."ogc_fid",
m.…
ST_Force2D(m."wkb_geometry") AS "wkb_geometry",
%s AS layer_group,
%s AS layer
FROM postgis."nvr:SCI_Rikstackande" m
WHERE m."ogc_fid" = %s
)
SELECT convert_to(ST_AsGeoJSON(feature.*,geom_column=>'wkb_geometry',pretty_bool=>'f'),'UTF8') AS "GeoJSON"
FROM feature
(We add `layer_group` and `layer` to the latter; the extra 48 bytes are
the size of `, "layer_group": "layer_group", "layer": "layer"` in the
output.)
nvr:HELCOM
Time: min=0.000s, max=0.004s, avg=0.000s
Size: min=1.21kiB, max=22.62kiB, avg=8.86kiB
nvr:SCI_Rikstackande
Time: min=0.000s, max=0.893s, avg=0.002s
Size: min=0.88kiB, max=57799.83kiB, avg=243.62kiB
nvr:SPA_Rikstackande
Time: min=0.000s, max=0.011s, avg=0.001s
Size: min=1.37kiB, max=367.70kiB, avg=33.13kiB
svk:stolpar
Time: min=0.000s, max=0.010s, avg=0.000s
Size: min=0.23kiB, max=0.23kiB, avg=0.23kiB
svk:stationsomraden
Time: min=0.000s, max=0.004s, avg=0.000s
Size: min=0.38kiB, max=1.21kiB, avg=0.58kiB
sks:UtfordAvverk (already in 2D so skip ST_Force2D() call)
Time: min=0.000s, max=0.085s, avg=0.001s
Size: min=0.55kiB, max=1117.44kiB, avg=9.80kiB
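For reference, a hedged sketch of running such a query from Python with
psycopg; the connection string and feature id are placeholders:
  import psycopg

  QUERY = '''
  SELECT convert_to(
           ST_AsGeoJSON(m.*, geom_column=>'wkb_geometry', pretty_bool=>'f'),
           'UTF8') AS "GeoJSON"
  FROM (SELECT "ogc_fid", ST_Force2D("wkb_geometry") AS "wkb_geometry"
        FROM postgis."nvr:SCI_Rikstackande") m
  WHERE m."ogc_fid" = %s
  '''

  with psycopg.connect('service=webmap') as conn:
      row = conn.execute(QUERY, (42,)).fetchone()
      geojson = bytes(row[0]) if row is not None else None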
|
| |
|
|
|
|
|
| |
application() returns an iterator, so we serve responses immediately as
they are received from the server (no buffering).
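A minimal self-contained sketch of the pattern; the payload is a
stand-in for rows received from the server:
  def application(environ, start_response):
      # Returning an iterator lets the WSGI server flush each chunk as it
      # is yielded instead of buffering the whole response in memory.
      start_response('200 OK', [('Content-Type', 'text/plain; charset=utf-8')])

      def stream():
          for i in range(3):  # stand-in for rows streamed from PostgreSQL
              yield ('chunk %d\n' % i).encode('utf-8')

      return stream()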
|
|
|
|
|
|
|
|
|
|
|
| |
This avoids exporting feature properties and geometries as GeoJSON.
(We could do the same for tiles, but it would be too slow.) The
downside is that the tables can't be read from while `webmap-import` is
running, so metadata lookups might briefly time out during the daily
update.
It's still worth doing as it greatly simplifies deployment and saves
GiB of disk space.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Having a shared temporary directory, flock(2)'ed to avoid races, is a
great idea in theory but unfortunately doesn't work so well with
systemd.exec(5)'s ReadWritePaths settings since
ReadWritePaths=/var/www/webmap/tiles
ReadWritePaths=/var/www/webmap/tiles.tmp
creates multiple mount points pointing at the same file system and
rename(2)/renameat2(2) can't cope with that. Quoting the manual:
EXDEV oldpath and newpath are not on the same mounted filesystem.
(Linux permits a filesystem to be mounted at multiple points,
but rename() does not work across different mount points, even
if the same filesystem is mounted on both.)
So the options are to either use a single ReadWritePaths=/var/www/webmap,
or --mvtdir-tmp=/var/www/webmap/tiles/.tmp. Both kind of defeat the
point (we'd in fact want to use --mvtdir-tmp=/var/tmp/webmap/tiles), so
we use mkdtemp(3) instead.
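A minimal sketch of the approach, assuming the staging directory is
created under the same parent as the published one so the final rename
stays on a single mount point; paths are illustrative:
  import os
  import shutil
  import tempfile

  final = '/var/www/webmap/tiles'            # published tile directory
  parent = os.path.dirname(final)

  # Stage under the same parent (hence the same mount point) so the
  # renames below cannot fail with EXDEV.
  staging = tempfile.mkdtemp(prefix='.tiles-', dir=parent)
  # ... write the new tile set into `staging` here ...

  old = tempfile.mkdtemp(prefix='.tiles-old-', dir=parent)
  os.rename(final, os.path.join(old, 'tiles'))  # move the previous set aside
  os.rename(staging, final)                     # publish the new set
  shutil.rmtree(old)                            # discard the previous set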
|