Opened 16 years ago
Last modified 16 years ago
#972 new enhancement
do locking when features are changed in PostGIS tables
Reported by: | msieczka | Owned by: | jef
---|---|---|---
Priority: | major: does not work as expected | Milestone: | Version 2.0.0
Component: | Data Provider | Version: | Trunk
Keywords: | | Cc: |
Must Fix for Release: | No | Platform: | All
Platform Version: | | Awaiting user input: | no
Description
QGIS doesn't prevent multiple users from editing the same table at the same time, which sooner or later results in overlapping features and other surprises that ruin the party. I asked on the Postgres mailing list whether PostGIS client software should do locking while a table is being edited [1], and their answer is yes:
The application should either:

1. Take an advisory lock (see the functions/admin functions chapter) so that it can use another table to indicate which parts of the GIS are in use.
2. Check to see if the data changed while the user was editing but before committing (known as "optimistic locking"). Then give the user the option to overwrite/rollback.

A last resort would be locking rows or the whole table, since a user might click "edit" then go to lunch. Certainly doing nothing isn't much use if you have multiple users editing.
Please fix QGIS in this regard. There's been a discussion about this on QGIS dev ML [2].
[1]http://archives.postgresql.org/pgsql-general/2008-02/msg01285.php [2]http://www.nabble.com/forum/ViewPost.jtp?post=15721104&framed=y
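The second option from the quoted reply ("optimistic locking") can be sketched in miniature without a database: each feature carries a version counter, and a commit is rejected when the version changed under the editor. Everything below (the `Table` class, the WKT strings) is illustrative only, not QGIS or PostGIS API:

```python
# Minimal in-memory sketch of optimistic locking: an update only
# succeeds if the row's version is unchanged since it was read.
class ConflictError(Exception):
    pass

class Table:
    def __init__(self):
        # fid -> (geometry as WKT, version)
        self.rows = {1: ("POLYGON((0 0,1 0,1 1,0 0))", 1)}

    def read(self, fid):
        return self.rows[fid]

    def update(self, fid, new_geom, expected_version):
        geom, version = self.rows[fid]
        if version != expected_version:
            # someone else committed first -> offer overwrite/rollback
            raise ConflictError(
                f"feature {fid} changed (v{version} != v{expected_version})")
        self.rows[fid] = (new_geom, version + 1)

table = Table()
# two users read the same feature
geom_a, ver_a = table.read(1)
geom_b, ver_b = table.read(1)
# user A commits first and wins
table.update(1, "POLYGON((0 0,2 0,2 2,0 0))", ver_a)
# user B's stale commit is rejected instead of silently overwriting
try:
    table.update(1, "POLYGON((0 0,3 0,3 3,0 0))", ver_b)
    conflict = False
except ConflictError:
    conflict = True
print(conflict)  # True
```

In a real provider the version column would live in the PostGIS table and the check would happen inside the committing transaction; the point here is only that "whoever saves last wins" becomes an explicit, reportable conflict.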
Change History (12)
comment:1 by , 16 years ago
Summary: | do feature do locking when features are changed in PostGIS tables → do locking when features are changed in PostGIS tables |
---|
comment:2 by , 16 years ago
Owner: | changed from | to
---|
follow-up: 4 comment:3 by , 16 years ago
Priority: | critical: causes crash or data corruption → minor: annoyance or enhancement |
---|
follow-up: 5 comment:4 by , 16 years ago
follow-up: 6 comment:5 by , 16 years ago
Must Fix for Release: | Yes → No |
---|---|
Priority: | minor: annoyance or enhancement → major: does not work as expected |
Type: | defect → enhancement |
Replying to msieczka:
Replying to jef:
Hi. Why do you think this is a minor issue? An application which allows for concurrent table use, should perform the necessary checks, in order to avoid data corruption, as a must, IMO.
I see it as an enhancement, not as a critical bug. There's no crash and no data corruption. For now, whoever saves last wins. But you're right, that applies to Type and not Priority - both adjusted. BTW locking wouldn't address adding features, so without other means of coordination people might add the "same" features multiple times without noticing in the process, even if they cannot edit or delete existing features.
follow-up: 7 comment:6 by , 16 years ago
Replying to jef:
I see it as an enhancement, not as a critical bug. There's no crash and no data corruption. For now, whoever saves last wins.
My point is that it never should have been implemented this way. I don't think that the fact it was designed wrong is an excuse for the data corruption it leads to.
But you're right, that applies to Type and not Priority - both adjusted. BTW locking wouldn't address adding features, so without other means of coordination people might add the "same" features multiple times without noticing in the process, even if they cannot edit or delete existing features.
I'm afraid I don't follow. You mean that locking the table would not prevent others from adding features?
follow-up: 8 comment:7 by , 16 years ago
Replying to msieczka:
My point is that it never should have been implemented this way. I don't think that the fact it was designed wrong is an excuse for the data corruption it leads to.
What data corruption?
BTW locking wouldn't address adding features, so without other means of coordination people might add the "same" features multiple times without noticing in the process, even if they cannot edit or delete existing features.
I'm afraid I don't follow. You mean that locking the table would not prevent others from adding features?
The current approach described in [2] above would be locking features, not tables. So once someone starts to modify or delete a feature it is locked until commit, so nobody else can modify or delete it concurrently. This wouldn't affect adding features at all.
follow-up: 10 comment:9 by , 16 years ago
Replying to jef:
What data corruption?
Eg. a polygon overlapping another one - each digitised by a different user, at the same time (ie. topology broken). Locking table write access for the user until he commits his changes would prevent such cases.
The current approach described in [2] above, would be locking features not tables.
Locking on rows is often not enough for PostGIS tables IMO. Single rows (features) of a PostGIS table are *spatially* interdependent, unlike in the case of non-spatial data. Locking per feature does not improve the situation much. With per-row locking you can still have many users digitising (ie. editing the table) at the same time, which will eventually lead to a mess in the data.
So once someone starts to modify or delete a feature it is locked until commit, so nobody else can modify or delete it concurrently. This wouldn't affect adding features at all.
I agree that locking per feature would be useful in some setups. But I also believe that an exclusive lock on a whole table would be the safest approach. Probably it would be best to have this configurable in QGIS (as a global setting and a per-project override), so that the user could choose an optimal locking mechanism for his PostGIS usage scenario.
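For illustration, the two granularities debated here map onto two standard PostgreSQL statements: a per-feature lock via `SELECT ... FOR UPDATE` and an exclusive whole-table lock via `LOCK TABLE`. A minimal sketch, assuming hypothetical helper names and a hypothetical `parcels` table:

```python
# Both statements below are standard PostgreSQL; only the helper
# function and table/column names are illustrative.

def lock_feature_sql(table: str, fid_column: str, fid: int) -> str:
    # Row lock: held until COMMIT/ROLLBACK. Blocks concurrent
    # UPDATE/DELETE of this one feature, but does NOT block INSERTs.
    return f"SELECT * FROM {table} WHERE {fid_column} = {fid} FOR UPDATE;"

def lock_table_sql(table: str) -> str:
    # EXCLUSIVE mode blocks all other writers (INSERT, UPDATE,
    # DELETE) until the transaction ends; reads are still allowed.
    return f"LOCK TABLE {table} IN EXCLUSIVE MODE;"

print(lock_feature_sql("parcels", "gid", 42))
print(lock_table_sql("parcels"))
```

Either statement would have to be issued inside the editing transaction; as the mailing list warned, a lock held while a user "goes to lunch" blocks everyone else, which is one argument for making the mechanism configurable as proposed above.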
comment:10 by , 16 years ago
Replying to msieczka:
Replying to jef:
What data corruption?
Eg. a polygon overlapping another one - each digitised by a different user, at the same time (ie. topology broken). Locking table write access for the user until he commits his changes would prevent such cases.
I see. Table locking has something going for it in that case. What I don't like is that it blocks users from editing features in different spatial areas.
But I wouldn't call that data corruption in our (trac) sense, as it isn't QGIS that corrupts the data, the user does. This could happen even without multiple users - give or take locking.
There should be a constraint on the database that prevents things like this from happening. In that case the commit of the changes would fail and you could align the geometry until the constraints are met and the commit succeeds. Those constraints would also apply to added features.
Without such constraints you would have to deal with the possibility of inconsistent topology anyway.
But I have to admit that I have never used topology constraints - if they exist at all in PostGIS. I imagine that they probably come with a big performance hit.
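A rough sketch of what such a database-side check could look like: a plain CHECK constraint cannot reference other rows, so rejecting overlaps would need a trigger. `ST_Overlaps` is a real PostGIS function; the table, column, and trigger-function names below are hypothetical:

```python
# SQL (as a string, for illustration) for a trigger that makes a
# commit fail when an inserted or updated polygon overlaps an
# existing one - the "constraint on the database" idea above.
NO_OVERLAP_TRIGGER = """
CREATE OR REPLACE FUNCTION reject_overlaps() RETURNS trigger AS $$
BEGIN
  IF EXISTS (
    SELECT 1 FROM parcels p
    WHERE p.gid <> NEW.gid
      AND ST_Overlaps(p.geom, NEW.geom)
  ) THEN
    RAISE EXCEPTION 'feature %% overlaps an existing feature', NEW.gid;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parcels_no_overlap
  BEFORE INSERT OR UPDATE ON parcels
  FOR EACH ROW EXECUTE PROCEDURE reject_overlaps();
"""
print(NO_OVERLAP_TRIGGER.strip().splitlines()[0])
```

As noted in the comment, the overlap query runs on every write, so a spatial index on `geom` would be essential; and as with any trigger-based check, concurrent transactions can still race unless combined with some locking.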
Single rows (features) of a PostGIS table are *spatially* interdependent, unlike in the case of non-spatial data.
I tend to disagree. That also applies to non-spatial data; actually I think that's the whole point of an RDBMS with referential integrity, normalization etc.
BTW speaking of normalization, a line topology would work in your case with row locking. Ok, probably not feasible as you lose the spot for the attributes that you used to assign to the polygons.
Probably it would be best to have this configurable in QGIS (as a global setting and a per-project override), so that the user could choose an optimal locking mechanism for his PostGIS usage scenario.
Ok.
comment:11 by , 16 years ago
Milestone: | Version 1.0.0 → Version 2.0.0 |
---|
comment:12 by , 16 years ago
A note on the meaning of 'major: does not work as expected'. This assignment should be used when a feature has been implemented (e.g. digitise a line) but some issue prevents the feature from working. Minor priority should be used where, by design or omission, a feature has not been implemented yet. In the case of this ticket I would contend the priority should be 'minor'. I'll leave the priority as is and let the chips fall where they may in this case....