#929 closed enhancement (fixed)
g.transform: dump coeffs and transform sparse points
| Reported by: | hamish | Owned by: | |
|---|---|---|---|
| Priority: | normal | Milestone: | 7.0.0 |
| Component: | Imagery | Version: | svn-trunk |
| Keywords: | g.transform | Cc: | |
| CPU: | All | Platform: | All |
Description
Hi,
The attached patch adds a flag to dump the transform coefficients, adds support for transforming arbitrary input coordinates to the other coordinate system, and adds a flag to do these operations in reverse. It also fixes a small continue bug triggered when format="" was selected. (Based on Glynn's tips on the -dev ML.)
I haven't decided what to do about reading from stdin. I suspect I'll have to drop the east_north=x1,y1,x2,y2,xn,yn,... option in favor of an input file containing points to be transformed, because the parser insists on two values, and "-,-" if given is read as -0,-0. Well, I guess I could have strcmp() test for "-,-", but that's a bit silly. I'd rather not add another flag to direct the input. Any ideas?
FIXME: when the reverse flag is used, check the TARGET location's G_projection() type and use it for G_scan_easting() instead of the current location's type.
Hamish
Attachments (1)
Change History (7)
Changed 10 years ago by
Attachment: gtrans_pts_dmp.diff added
comment:1 Changed 10 years ago by
(the original idea here was to find the bounding box coords in the target location given the current set of GCPs and region settings)
comment:2 Changed 10 years ago by
Replying to hamish:
I haven't decided what to do about reading from stdin. I suspect I'll have to drop the east_north=x1,y1,x2,y2,xn,yn,... option in favor of an input file containing points to be transformed because the parser insists on two values, and "-,-" if given is read as -0,-0. Well, I guess I could have strcmp() test for "-,-" but it's a bit silly. I'd rather not add another flag to direct the input. any ideas?
Don't try and make one option do different things; use a separate option for reading from a file (use file=- for stdin rather than a flag).
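Glynn's suggestion could be sketched roughly as below in plain C. This is a hypothetical illustration, not the actual g.transform code; the helper names open_points_file() and parse_pair() are invented here:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: open the points file named by a separate
 * file= option, treating "-" as stdin per the suggestion above. */
static FILE *open_points_file(const char *name)
{
    if (strcmp(name, "-") == 0)
        return stdin;
    return fopen(name, "r");
}

/* Parse one "east north" pair from a line of input.
 * Returns 1 on success, 0 on a malformed line. */
static int parse_pair(const char *line, double *east, double *north)
{
    return sscanf(line, "%lf %lf", east, north) == 2;
}
```

This keeps the east_north= option untouched and sidesteps the "-,-" parsing problem entirely, since file input never goes through the option parser.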
comment:3 Changed 10 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | new → closed |
comment:4 follow-up: 5 Changed 10 years ago by
I have been pointed to this post by the author: http://www.cruisersforum.com/forums/f134/charts-31346-36.html#post398493
with GPL3 code linked at the end of the post. (remove the .doc; stupid webCMS...)
quote:
- First for all ref points it converts ref lat/lon in ref E/N using the exact projection formulas (Mercator and Transverse-Mercator supported)
- Second it tries to approximate the ref X/Y to E/N by polynomial approx. It tests 23 polynomials to find the best one (minimum error, well conditioning). It works with at least 2 ref points.
- Third it uses this couple of transformations (lat/lon->E/N->X/Y) to recalculate 36 new ref points with a disposal that is always well-conditioned so it can calculate the final third order polys that go directly from lat/lon to X/Y and viceversa
/endquote
the tests are:
```c
/* PolyEleEval - by Marco Certelli - License: GPL - Version 01beta

   This function evaluates the elements of the polynomial that
   approximates a set of ref points placed in the plane xy. This is
   specific for XY<->EN transformations needed in Chart Georeferencing.

   One approximating poly is defined as:
   N = a[0] + a[1]X + a[2]Y + a[3]X^2 + a[4]XY + a[5]Y^2
     + a[6]X^3 + a[7]X^2Y + a[8]XY^2 + a[9]Y^3
   E = b[0] + b[1]X + b[2]Y + b[3]X^2 + b[4]XY + b[5]Y^2
     + b[6]X^3 + b[7]X^2Y + b[8]XY^2 + b[9]Y^3

   This function takes the int nref, representing number of available
   ref points, 2 vectors of double x[] and y[], representing the ref
   points position (either x, y or lat, lon), and returns a vector of
   int e[10] which i-th value [1 or 0] states if the relative
   polynomial coefficient a[i] is well or ill-conditioned. The x[] and
   y[] input values shall be between 0.0 and 1.0 (the caller shall
   scale data). The function itself returns an int 1 if an error
   occurred. */

#define N_POLY_TEST 23
{
    int i, j, n, t, ne, optt;
    double *A[2 * MAX_REF_POINTS + 2], B1[2 * MAX_REF_POINTS + 2],
           B2[MAX_REF_POINTS], C1[20], C2[10];
    double eqmN, eqmE, sN, sE, rcond, sC, prjerror, ill, opt, minopt;

    //     n  X  Y  XX XY YY XXX XXY XYY YYY  a=b
    int test[N_POLY_TEST][11] = {
         3,1,1,0,0,0,0,0,0,0,1,  //  0 - P=C + X + Y & (Cxx=Cyy, Cxy=-Cyx)
         3,1,1,0,0,0,0,0,0,0,0,  //  1 - P=C + X + Y
         4,1,1,1,0,0,0,0,0,0,1,  //  2 - P=C + X + Y + X^2 & (Cxx=Cyy, Cxy=-Cyx)
         4,1,1,0,1,0,0,0,0,0,1,  //  3 - P=C + X + Y + XY & (Cxx=Cyy, Cxy=-Cyx)
         4,1,1,0,0,1,0,0,0,0,1,  //  4 - P=C + X + Y + Y^2 & (Cxx=Cyy, Cxy=-Cyx)
         4,1,1,1,0,0,0,0,0,0,0,  //  5 - P=C + X + Y + X^2
         4,1,1,0,1,0,0,0,0,0,0,  //  6 - P=C + X + Y + XY
         4,1,1,0,0,1,0,0,0,0,0,  //  7 - P=C + X + Y + Y^2
         5,1,1,1,1,0,0,0,0,0,0,  //  8 - P=C + X + Y + X^2 + XY
         5,1,1,0,1,1,0,0,0,0,0,  //  9 - P=C + X + Y + XY + Y^2
         5,1,1,1,0,1,0,0,0,0,0,  // 10 - P=C + X + Y + X^2 + Y^2
         6,1,1,1,1,1,0,0,0,0,0,  // 11 - P=C + X + Y + X^2 + XY + Y^2
         6,1,1,1,1,0,0,1,0,0,0,  // 12 - P=C + X + Y + X^2 + XY + X^2Y
         6,1,1,0,1,1,0,0,1,0,0,  // 13 - P=C + X + Y + XY + Y^2 + Y^2X
         7,1,1,1,1,1,1,0,0,0,0,  // 14 - P=C + X + Y + X^2 + XY + Y^2 + X^3
         7,1,1,1,1,1,0,1,0,0,0,  // 15 - P=C + X + Y + X^2 + XY + Y^2 + X^2Y
         7,1,1,1,1,1,0,0,1,0,0,  // 16 - P=C + X + Y + X^2 + XY + Y^2 + XY^2
         7,1,1,1,1,1,0,0,0,1,0,  // 17 - P=C + X + Y + X^2 + XY + Y^2 + Y^3
         8,1,1,1,1,1,0,1,1,0,0,  // 18 - P=C + X + Y + X^2 + XY + Y^2 + X^2Y + XY^2
         8,1,1,1,1,1,1,0,0,1,0,  // 19 - P=C + X + Y + X^2 + XY + Y^2 + X^3 + Y^3
         9,1,1,1,1,1,1,1,1,0,0,  // 20 - P=C + X + Y + X^2 + XY + Y^2 + X^3 + X^2Y + XY^2
         9,1,1,1,1,1,0,1,1,1,0,  // 21 - P=C + X + Y + X^2 + XY + Y^2 + X^2Y + XY^2 + Y^3
        10,1,1,1,1,1,1,1,1,1,0}; // 22 - P=C + X + Y + X^2 + XY + Y^2 + X^3 + X^2Y + XY^2 + Y^3
    int element[N_POLY_TEST][10];
```
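For reference, the full third-order form that the table's last row (test 22) enables can be evaluated as below. This is a small illustrative sketch written for this discussion, not part of the quoted code:

```c
/* Evaluate the generalised third-order polynomial from the quoted
 * comment:
 *   N = a[0] + a[1]X + a[2]Y + a[3]X^2 + a[4]XY + a[5]Y^2
 *     + a[6]X^3 + a[7]X^2Y + a[8]XY^2 + a[9]Y^3
 * The ten coefficients correspond to the ten columns that the
 * test[] table switches on and off. */
static double poly3(const double a[10], double X, double Y)
{
    return a[0] + a[1] * X + a[2] * Y
         + a[3] * X * X + a[4] * X * Y + a[5] * Y * Y
         + a[6] * X * X * X + a[7] * X * X * Y
         + a[8] * X * Y * Y + a[9] * Y * Y * Y;
}
```

Rows with fewer terms are the same expression with the corresponding coefficients forced to zero.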
I am wondering if this is useful for us, but I am reminded that in stats the simplest model that explains the data is usually the best one to use. You can use a high nth-order polynomial to exactly fit the data, but what you are really doing is fitting the noise.
So for example, say I have a good, undistorted scan of a printed map which I wish to georectify. I can get a smaller RMS by using order=3, but that is just fitting my measurement error better. It's more appropriate to use order=2, which will average out my measurement error instead of modelling the noise -- and that is what I really want.
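That point can be illustrated numerically: a cubic forced through noisy samples of a straight line has zero residual at the control points, yet a least-squares line predicts better at an unseen point. An illustrative sketch (nothing to do with the GRASS code itself):

```c
/* Four samples of the true line y = 2x, each offset by +/-0.1 of
 * simulated "measurement noise". */
static const double px[4] = { 0.0, 1.0, 2.0, 3.0 };
static const double py[4] = { 0.1, 1.9, 4.1, 5.9 };

/* Cubic through all four points (zero residual at the data)
 * via Lagrange interpolation. */
static double cubic_fit(double x)
{
    double sum = 0.0;
    for (int i = 0; i < 4; i++) {
        double L = 1.0;
        for (int j = 0; j < 4; j++)
            if (j != i)
                L *= (x - px[j]) / (px[i] - px[j]);
        sum += py[i] * L;
    }
    return sum;
}

/* Least-squares straight line through the same points. */
static double line_fit(double x)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < 4; i++) {
        sx += px[i]; sy += py[i];
        sxx += px[i] * px[i]; sxy += px[i] * py[i];
    }
    double slope = (4 * sxy - sx * sy) / (4 * sxx - sx * sx);
    double icept = (sy - slope * sx) / 4;
    return icept + slope * x;
}
```

With this data the cubic reproduces every noisy sample exactly, while at the unseen point x=0.5 (true value 1.0) the line lands noticeably closer than the cubic: the cubic spent its extra degrees of freedom chasing the noise.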
comments?
Hamish
comment:5 follow-up: 6 Changed 10 years ago by
Replying to hamish:
I am wondering if this is useful for us but I am reminded that in stats that the simplest model fit that explains the data is usually the best one to use. You can use high nth-order polynomials to exactly fit the data, but what you are really doing is fitting the noise.
I would expect that you're typically trying to find a suitable approximation to a complex analytic function (i.e. a typical cartographic or geometric projection).
For many projections, a quadratic polynomial will be a poor fit. The main reason for using cubic functions for interpolation is that you can specify both the value and first derivative at each end (4 degrees of freedom).
If you consider approximating sin(x) symmetrically about zero: you can get a reasonable approximation with a cubic function, but the optimal quadratic approximation will actually be linear.
Note that the most complex case in the above code is the same as g.transform with order=3, i.e. generalised cubic (rather than bicubic).
comment:6 Changed 10 years ago by
Replying to glynn:
I would expect that you're typically trying to find a suitable approximation to a complex analytic function (i.e. a typical cartographic or geometric projection).
For many projections, a quadratic polynomial will be a poor fit. The main reason for using cubic functions for interpolation is that you can specify both the value and first derivative at each end (4 degrees of freedom).
Sure, if I were reprojecting, or suspected the scan was distorted, order=3 would be appropriate.
I guess I should have mentioned that I always try very hard to match the target location's projection to the scanned-map's native projection. So I'm using i.rectify mainly as a georeferencing tool, not a reprojection tool. (so rotation+scale but no warping)
e.g. if the quality scan is a nautical chart in Mercator projection, with a known +lat_ts and grid lines in lat/lon, I'll put GCPs on all the grid line confluences [*] and enter the coords as degrees, and finally run a little script to reproject the target POINTS to the custom merc projection (see wiki addons).
Then run r.proj to pull it from the custom merc location into whatever working location it is needed.
It's probably overkill, but I'm trying to do the right thing..
Hamish
[*] totally unrelated but interesting: http://confluence.org/