# NoiseRobustDifferentiation.jl

Julia reimplementation of Total Variation Regularized Numerical Differentiation (TVDiff). Based on Rick Chartrand's original MATLAB code and Simone Sturniolo's Python reimplementation.
## Installation

To install this package and its dependencies, open the Julia REPL and run

```julia-repl
julia> ]add NoiseRobustDifferentiation
```

Julia 1.5 is required.
## Functions

### `NoiseRobustDifferentiation.tvdiff` — Function

```julia
tvdiff(data::AbstractVector, iter::Integer, α::Real; kwargs...)
```
#### Arguments

- `data::AbstractVector`: Vector of data to be differentiated.
- `iter::Integer`: Number of iterations to run the main loop. A stopping condition based on the norm of the gradient vector `g` below would be an easy modification.
- `α::Real`: Regularization parameter. This is the main parameter to fiddle with. Start by varying it by orders of magnitude until reasonable results are obtained; a value to the nearest power of 10 is usually adequate. Higher values increase regularization strength and improve conditioning.
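The reason `α` needs tuning at all is that unregularized differentiation amplifies noise. A self-contained sketch (standard-library Julia only, with made-up data) shows how a tiny high-frequency perturbation dominates a naive finite-difference derivative:

```julia
# Sketch: naive finite differences amplify noise (hypothetical data).
n = 1000
x = range(0, 1; length=n)
dx = step(x)

f = sin.(2π .* x)                      # smooth signal
noisy = f .+ 0.01 .* sin.(1000 .* x)   # tiny high-frequency perturbation

naive(u) = diff(u) ./ dx               # unregularized forward differences
exact = 2π .* cos.(2π .* x[1:end-1])   # true derivative at the left nodes

err_clean = maximum(abs.(naive(f) .- exact))
err_noisy = maximum(abs.(naive(noisy) .- exact))
println((err_clean, err_noisy))        # the 0.01-amplitude noise dominates
```

Regularization trades some of this noise amplification for smoothness, which is exactly the trade-off `α` controls.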
#### Keywords

- `u_0::AbstractVector`: Initialization of the iteration. Default is the naive derivative (without scaling), of appropriate length (this differs between the two methods). Although the solution is theoretically independent of the initialization, a poor choice can exacerbate conditioning issues when the linear system is solved.
- `scale::String`: Scale of dataset, `"large"` or `"small"` (case insensitive). Default is `"small"`. `"small"` has somewhat better boundary behavior, but becomes unwieldy for very large datasets. `"large"` has simpler numerics but is more efficient for large-scale problems. `"large"` is more readily modified for higher-order derivatives, since the implicit differentiation matrix is square.
- `ε::Real`: Parameter for avoiding division by zero. Default is `1e-6`. Results should not be very sensitive to the value. Larger values improve conditioning and therefore speed, while smaller values give more accurate results with sharper jumps.
- `dx::Real`: Grid spacing, used in the definition of the derivative operators. Default is `1 / length(data)`.
- `precond::String`: Preconditioner for the conjugate gradient method. Default is `"none"`.
  - `scale = "small"`: While in principle `precond = "simple"` should speed things up, the preconditioner can sometimes cause convergence problems instead, and should be left at `"none"`.
  - `scale = "large"`: The improved preconditioners are one of the main features of the algorithm, so using the default `"none"` is discouraged. Currently, `"diagonal"`, `"amg_rs"`, `"amg_sa"`, and `"cholesky"` are available.
- `diff_kernel::String`: Kernel to use in the integral to smooth the derivative. Default is `"abs"`, the absolute value $|u'|$. It can be changed to `"square"`, the squared value $(u')^2$. The latter produces smoother derivatives, whereas the absolute value tends to make them more blocky.
- `cg_tol::Real`: Relative tolerance used in the conjugate gradient method. Default is `1e-6`.
- `cg_maxiter::Int`: Maximum number of iterations to use in the conjugate gradient optimisation. Default is `100`.
- `show_diagn::Bool`: Whether to display diagnostics at each iteration. Default is `false`. Useful for diagnosing preconditioning problems. When the tolerance is not met, an early iterate being best is more worrying than a large relative residual.
#### Output

- `u`: Estimate of the regularized derivative of `data`, with `length(u) == length(data)`.
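A minimal usage sketch, assuming the package is installed; the data and the choice `α = 0.1` (picked "to the nearest power of 10" after a coarse scan, as recommended above) are made up for illustration:

```julia
using NoiseRobustDifferentiation

# Hypothetical noisy data: samples of |x - 0.5|, whose true derivative
# jumps from -1 to +1 at x = 0.5.
n = 200
x = range(0, 1; length=n)
data = abs.(x .- 0.5) .+ 0.01 .* sin.(500 .* x)

# 50 iterations of the main loop; α from a coarse order-of-magnitude scan.
û = tvdiff(data, 50, 0.1; dx=step(x))

length(û) == length(data)  # output has the same length as the input
```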
## Differences to MATLAB Code

### Conjugate gradient method

The original code uses MATLAB's built-in function `pcg()`, which implements the preconditioned conjugate gradient method (PCG). This code uses the conjugate gradient method (CG) from IterativeSolvers.jl. Refer to the implementation details for a brief discussion of the differences between the two methods.

Since the CG method from IterativeSolvers.jl allows for preconditioners, most of the options from Preconditioners.jl are implemented using default parameters.
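For context on what CG does inside each iteration: it solves a symmetric positive definite linear system to a relative tolerance, analogous to `cg_tol` above. A textbook CG sketch in plain Julia (not the IterativeSolvers.jl implementation; names and the test system are ours):

```julia
using LinearAlgebra

# Textbook conjugate gradient for a symmetric positive definite A.
# `tol` plays the role of a relative tolerance like `cg_tol`.
function cg_sketch(A, b; tol=1e-6, maxiter=100)
    x = zeros(length(b))
    r = b - A * x          # residual
    p = copy(r)            # search direction
    rs = dot(r, r)
    nb = norm(b)
    for _ in 1:maxiter
        Ap = A * p
        a = rs / dot(p, Ap)
        x .+= a .* p
        r .-= a .* Ap
        rs_new = dot(r, r)
        sqrt(rs_new) / nb < tol && break
        p .= r .+ (rs_new / rs) .* p
        rs = rs_new
    end
    return x
end

# SPD test system: shifted 1D discrete Laplacian.
n = 50
A = SymTridiagonal(fill(3.0, n), fill(-1.0, n - 1))
b = ones(n)
x = cg_sketch(A, b)
println(norm(A * x - b))   # small residual
```

A preconditioner (PCG) applies the same iteration to a transformed system with a smaller condition number, which is why the `"large"`-scale preconditioners speed up convergence.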
### New parameters

- `precond`: Method used for preconditioning.
- `cg_tol`: Tolerance used in the conjugate gradient method.
### Other differences

- `diag_flag` has been renamed to `show_diagn`
- removed plotting flag
## Citation

Please cite the following paper if you use this code in published work:

> Rick Chartrand, "Numerical differentiation of noisy, nonsmooth data," ISRN Applied Mathematics, Vol. 2011, Article ID 164564, 2011.