pwtools.num.polyfit¶
- pwtools.num.polyfit(points, values, deg, scale=True, scale_vand=False)[source]¶
Fit an nd polynomial of degree deg. The dimension is points.shape[1].
- Parameters:
points (nd array (npoints,ndim)) – npoints points in ndim-space, to be fitted by a ndim polynomial f(x0,x1,…,x{ndim-1}).
values (1d array (npoints,)) – Values at points, to be fitted.
deg (int) – Degree of the poly (e.g. 3 for cubic).
scale (bool, optional) – Scale points and values to unity internally before fitting. fit['coeffs'] are then for scaled data; polyval handles that transparently.
scale_vand (bool, optional) – Scale the Vandermonde matrix as in numpy.polyfit (divide by column norms to improve the condition number).
- Returns:
fit – {coeffs, deg, pscale, vscale, pmin, vmin} where coeffs = 1d array ((deg+1)**ndim,) with poly coefficients and *min and *scale are for data scaling. Input for polyval().
- Return type:
dict
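To illustrate what the returned dict contains, here is a minimal 1d sketch of scaled fitting and evaluation. The helper names polyfit_1d_sketch and polyval_1d_sketch are hypothetical and not part of pwtools; the real polyfit is nd and builds its Vandermonde matrix differently, but the roles of coeffs, pmin, vmin, pscale and vscale are the same.

```python
import numpy as np

def polyfit_1d_sketch(points, values, deg):
    # Map points and values to [0,1] before fitting (pwtools uses [0,1],
    # not [-1,1] as np.polynomial.Polynomial.fit does).
    pmin, vmin = points.min(), values.min()
    pscale = points.max() - pmin
    vscale = values.max() - vmin
    p = (points - pmin) / pscale
    v = (values - vmin) / vscale
    # Vandermonde matrix [1, p, p**2, ..., p**deg], least-squares fit
    vand = np.vander(p, deg + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(vand, v, rcond=None)
    return dict(coeffs=coeffs, deg=deg, pmin=pmin, vmin=vmin,
                pscale=pscale, vscale=vscale)

def polyval_1d_sketch(fit, points):
    # Apply the stored point scaling, evaluate, then undo the value scaling.
    p = (points - fit["pmin"]) / fit["pscale"]
    vand = np.vander(p, fit["deg"] + 1, increasing=True)
    return vand @ fit["coeffs"] * fit["vscale"] + fit["vmin"]

# Data with very different scales on x and y
x = np.linspace(0.0, 1e5, 50)
y = 1e-8 * (x - 5e4) ** 2 + 3.0
fit = polyfit_1d_sketch(x, y, 2)
print(np.allclose(polyval_1d_sketch(fit, x), y))  # True
```

Note that fit['coeffs'] here are coefficients of the polynomial in scaled coordinates, which is why polyval must know pmin, vmin, pscale and vscale.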
Notes
scale: numpy.polyfit only does scale_vand by default, which seems to be enough for most real-world data. The newer np.polynomial.Polynomial.fit does the equivalent of what we do here with scale, but only for `points`, not `values`. They map to [-1,1]; we use [0,1].
In most tests so far, scale_vand and scale have pretty much the same effect: enable fitting data with very different scales on x and y.
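A plain-numpy 1d sketch of the scale_vand idea (illustration only, not pwtools code): divide each Vandermonde column by its norm before the least-squares solve, then undo the scaling on the coefficients, as numpy.polyfit does internally.

```python
import numpy as np

x = np.linspace(0.0, 1e4, 50)          # large scale on x
y = 2.0 + 3.0 * x + 4.0 * x ** 2

vand = np.vander(x, 3, increasing=True)
print(f"{np.linalg.cond(vand):.1e}")   # ill-conditioned

# scale_vand: divide each column by its norm, solve, undo the scaling
norms = np.sqrt((vand ** 2).sum(axis=0))
print(f"{np.linalg.cond(vand / norms):.1e}")  # much better conditioned
coeffs_scaled, *_ = np.linalg.lstsq(vand / norms, y, rcond=None)
coeffs = coeffs_scaled / norms
print(np.allclose(coeffs, [2.0, 3.0, 4.0], rtol=1e-4))
```

The effect is the same kind of conditioning fix that scale achieves by mapping the data itself to [0,1].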
Because fit['coeffs'] are w.r.t. scaled data, you cannot compare them to the result of np.polyfit directly. Only with scale=False can you compare the coeffs, which should be the same up to numerical noise. However, you may simply compare the resulting fits, evaluated at the same points.
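For example, in 1d the unscaled fit reduces to an ordinary Vandermonde least-squares problem, so its coefficients can be checked against np.polyfit directly (plain-numpy sketch, not a call into pwtools; note that np.polyfit returns coefficients highest power first):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 30)
y = 1.0 - 2.0 * x + 0.5 * x ** 3

# Unscaled fit: what a scale=False fit amounts to in 1d
vand = np.vander(x, 4, increasing=True)
coeffs, *_ = np.linalg.lstsq(vand, y, rcond=None)

# np.polyfit orders coefficients highest power first, so reverse them
np_coeffs = np.polyfit(x, y, 3)[::-1]
print(np.allclose(coeffs, np_coeffs))  # True, up to numerical noise

# The always-safe comparison: evaluate both fits at the same points
print(np.allclose(vand @ coeffs, np.polyval(np.polyfit(x, y, 3), x)))  # True
```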