| Commit message | Author | Age | Files |
|
Rather than putting everything in the 'postgis' schema using ':' as
hierarchy separator.
When the destination dataset is not PostgreSQL, the layer names are
prefixed with '$SCHEMA.' instead of '$SCHEMA:'.
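A minimal sketch of that naming scheme. The helper name and signature are assumptions for illustration, not taken from the codebase; only the separator convention comes from the commit message above.

```python
def split_layer_name(name, dst_is_postgresql):
    """Map a 'schema:layer' name to (schema, layer) for PostgreSQL
    destinations, or to (None, 'schema.layer') for other drivers.
    Hypothetical helper; only the separator rule is from the commit."""
    if ":" not in name:
        return None, name
    schema, layer = name.split(":", 1)
    if dst_is_postgresql:
        return schema, layer
    return None, f"{schema}.{layer}"

print(split_layer_name("sks:avverk_utford", True))   # ('sks', 'avverk_utford')
print(split_layer_name("sks:avverk_utford", False))  # (None, 'sks.avverk_utford')
```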
|
manually.
This assumes GDAL ≥ 3.7; see
https://gdal.org/en/stable/api/python/raster_api.html#osgeo.gdal.Dataset.ExecuteSQL .
|
To match the layer definition found upstream.
|
(Based on Mozilla Firefox 140.4.0esr.)
|
When the output driver supports altering GeomFieldDefn's nullable flag
after the fact. PostgreSQL supports that. GPKG doesn't, but one can
use GEOMETRY_NULLABLE=NO in layer creation options instead.
See https://gdal.org/en/stable/development/rfc/rfc53_ogr_notnull_default.html
This commit partially reverts 2c3ee79cb434fc4cf315ee3a6a526156053d76c4.
|
Rather than CreateLayer(). Unfortunately not very helpful since
.SetNullable() appears to be a no-op; it doesn't set a NOT NULL
constraint on the geometry column.
|
To avoid JSON-encoding on the Python side.
|
Recent GDAL versions can set comments on columns, which are now
reflected in the schema.
|
Older GDAL versions need the destination path to be a string, not a
path-like object.
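A tiny compatibility shim for that, sketched with a hypothetical helper name:

```python
import os
from pathlib import Path

def as_gdal_path(dest):
    """Return a plain string for older GDAL bindings that reject
    os.PathLike arguments (helper name is hypothetical)."""
    return os.fspath(dest) if isinstance(dest, os.PathLike) else dest

print(as_gdal_path(Path("dest.gpkg")))  # dest.gpkg
```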
|
For consistency with raster handling.
|
Raster data is not stored in the PostGIS database. Instead, the mtime
of the target directory is used to determine whether the COG is up to
date.
Add a new flag --metadata-compress for JSON metadata compression (which
also applies to MVT metadata), and --rasterdir for the target raster
directory.
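The mtime-based freshness check could look like this; the function name and comparison semantics are assumptions sketched from the description above, not the actual implementation.

```python
from pathlib import Path

def cog_up_to_date(rasterdir, source_mtime):
    """Consider the COG current when the target raster directory's
    mtime is at least as recent as the source's (hypothetical sketch)."""
    try:
        return Path(rasterdir).stat().st_mtime >= source_mtime
    except FileNotFoundError:
        return False
```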
|
Also rename the `administrativindelning` layer group to `adm`.
|
been updated upstream.
|
This reverts commit 86f1e635a647e94c33e10e4d230853fd5d84c0bc. It
appears that .zip files have a consistent schema now…
|
“Age” is misleading as it suggests a value that increases over time as
time stretches away from a fixed epoch. A timestamp (ts) is a more
appropriate term for that field (it's rather a “daystamp”, but that's a
matter of precision).
|
This allows client-side filtering.
The "age" attribute is a signed short (int16) expressing the number of
days since 1970-01-01. This covers the range 1880-04-15 to 2059-09-18
which should be more than enough. The source value is a Date or
Datetime and depends on the source layer.
- For vbk:*, it's the date at which the project was last saved in
Vindbrukskollen.
- For mrr:*, it's the date at which the application came to
Bergsstaten for applications, and decision date for granted permits.
- For avverk:*, it's the date at which the application came to
Skogsstyrelsen for applications, and the clearcut date (according to
Skogsstyrelsen) for completed objects.
For other layers, we don't export that attribute.
This makes the tiles a bit larger. Before (without the extra
attribute):
vbk: 1261× tiles, 599kiB uncompressed (avg=487B/t), 389kiB brotli (35%, avg=316B/t)
avverk: 3734× tiles 115MiB uncompressed (avg=32kiB/t), 72MiB brotli (37%, avg=20kiB/t)
mrr: 1324× tiles, 331kiB uncompressed (avg=257B/t), 289kiB brotli (13%, avg=223B/t)
→ total 121631367B uncompressed (avg=19kiB/t), 76692807B brotli (37%, avg=12kiB/t)
After (with the extra attribute):
vbk: 1261× tiles, 714kiB uncompressed (avg=580B/t), 425kiB brotli (40%, avg=345B/t)
avverk: 3734× tiles 127MiB uncompressed (avg=35kiB/t), 78MiB brotli (39%, avg=21kiB/t)
mrr: 1324× tiles, 323kiB uncompressed (avg=322B/t), 342kiB brotli (18%, avg=265B/t)
→ total 134274796B uncompressed (avg=21kiB/t), 82264731B brotli (39%, avg=13kiB/t)
Summary:
+12.1MiB uncompressed (+10.4%, avg=+1.95kiB/t)
+5.3MiB compressed (+7.3%, avg=+882B/t)
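The int16 encoding described above can be checked with the standard library; the helper names are made up, but the range endpoints are the ones quoted in the commit message (using the symmetric bound ±32767).

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def encode_age(d):
    """Days since 1970-01-01, range-checked against a symmetric int16
    bound (hypothetical helper names)."""
    days = (d - EPOCH).days
    if not -32767 <= days <= 32767:
        raise OverflowError(d)
    return days

def decode_age(days):
    return EPOCH + timedelta(days=days)

# The usable range quoted above:
print(decode_age(-32767))  # 1880-04-15
print(decode_age(32767))   # 2059-09-18
```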
|
This avoids crashing when `webmap-import` is run as another user on a
homedir with restrictive permissions, for instance.
|
a stable attribute.
|
“lst:riksintresse_*”.
|
And rename “sks:AvverkAnm” and “sks:UtfordAvverk” to
“sks:avverk_anmald” and “sks:avverk_utford”, respectively.
|
It's clearer that way; after all, Naturvårdsavtal and Återvätningsavtal
are not part of Naturvårdsverket's Naturvårdsregistret.
|
Including k-skog, but we leave that one alone for now as the script
lacks some logic to process raster files.
|
Inspired by https://vbk.lansstyrelsen.se/ .
|
LST.vbk_havsbaserad_vindkraft.
It appears these source files are exported daily at 01:53, then again
at 07:41 but with different content and schema… We're now targeting the
latter.
This reverts commit 59a0ebbb824eaa75dcf7b0e39c5b627fb5ace294 and
partially reverts commit 2d8f839914feb2b9d31f2682b1c9992dd21c9b16.
|
All features have the same export date value, so the date is arguably a
schema attribute, not a feature attribute. Since exports from the
mineral register happen daily nowadays, the date should never be more
than a day off compared with the zip file's mtime.
|
The file appears to be official and updated as often as the GPKG
nowadays. Both have 101 distinct features.
And unlike the GPKG the multipolygons are not split, so we add the
UNIQUE CONSTRAINT on "OmrID".
For some reason there are a lot of duplicates in the GPKG.
sqlite> SELECT "ObjectID", COUNT(*) FROM "lst_vbk_havsbaserad_vindkraft" GROUP BY 1 ORDER BY 2;
101|9
1|24
2|24
[…]
100|24
|
So the information can be exposed to the webmap's info dialog.
|
We don't want to bump ctimes since they are used in the layer cache.
|
The Web Application doesn't need the original full/non-simplified
geometry. After all, OpenLayers has fetched the tiles already, and the
(visible part of the) geometry is already cached in the target SRS with
sensible simplification factors.
So there is really no need to transfer megabytes of high-precision data
to the client to highlight the feature. This change means that CGI
responses will remain small, hence can be buffered.
To further speed things up, one could store geometry area/perimeter/length
as generated columns (along with the list of municipality and county
codes) and serve rows directly from PostgreSQL (bypassing PostGIS). Also,
since buffering is now an option, the rows don't need to be served in the
same order as the request, so one could batch queries using ANY[] or IN.
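The generated-columns idea could be sketched as follows. The table and column names are hypothetical; PostgreSQL ≥ 12 supports STORED generated columns, and PostGIS functions such as ST_Area and ST_Perimeter are immutable, so they are allowed in the generation expression.

```sql
-- Hypothetical table/column names, illustrating the idea only.
ALTER TABLE postgis.avverk
  ADD COLUMN area_m2 double precision
    GENERATED ALWAYS AS (ST_Area(geom)) STORED,
  ADD COLUMN perimeter_m double precision
    GENERATED ALWAYS AS (ST_Perimeter(geom)) STORED;
```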
|