In mathematics a radial basis function (RBF) is a real-valued function $\varphi$ whose value depends only on the distance between the input and some fixed point, either the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$, or some other fixed point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x} - \mathbf{c}\|)$. Any function $\varphi$ that satisfies the property $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$ is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. Radial basis functions are often used as a collection $\{\varphi_k\}_k$ which forms a basis for some function space of interest, hence the name.
Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988,[1][2] which stemmed from Michael J. D. Powell's seminal research from 1977.[3][4][5]
RBFs are also used as a kernel in support vector classification.[6] The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.[7][8]
Definition
A radial function is a function $\varphi : [0, \infty) \to \mathbb{R}$. When paired with a norm $\|\cdot\| : V \to [0, \infty)$ on a vector space $V$, a function of the form $\varphi_{\mathbf{c}}(\mathbf{x}) = \varphi(\|\mathbf{x} - \mathbf{c}\|)$ is said to be a radial kernel centered at $\mathbf{c} \in V$. A radial function and the associated radial kernels are said to be radial basis functions if, for any finite set of nodes $\{\mathbf{x}_k\}_{k=1}^n \subseteq V$, all of the following conditions are true:
The kernels $\varphi_{\mathbf{x}_1}, \varphi_{\mathbf{x}_2}, \dots, \varphi_{\mathbf{x}_n}$ are linearly independent (for example $\varphi(r) = r^2$ in $V = \mathbb{R}$ is not a radial basis function)
The kernels $\varphi_{\mathbf{x}_1}, \varphi_{\mathbf{x}_2}, \dots, \varphi_{\mathbf{x}_n}$ span a Haar space, meaning that the interpolation matrix $A_{jk} = \varphi(\|\mathbf{x}_j - \mathbf{x}_k\|)$ is non-singular
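A minimal sketch of the radial-kernel definition above, assuming the Euclidean norm and using function names chosen here purely for illustration:

```python
import numpy as np

def radial_kernel(phi, c):
    # Build the radial kernel phi_c(x) = phi(||x - c||) centered at c from a
    # univariate profile phi: [0, inf) -> R (Euclidean norm assumed here).
    c = np.asarray(c, dtype=float)
    return lambda x: phi(np.linalg.norm(np.asarray(x, dtype=float) - c))

# Example: a Gaussian profile centered at c = (1, 2).
phi_c = radial_kernel(lambda r: np.exp(-r**2), [1.0, 2.0])

# The kernel depends on x only through its distance from the center, so any
# two points at the same distance from c give the same value.
print(phi_c([2.0, 2.0]), phi_c([1.0, 3.0]))   # both at distance 1 from c
```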
Commonly used types of radial basis functions include (writing $r = \|\mathbf{x} - \mathbf{x}_i\|$ and using $\varepsilon$ to indicate a shape parameter that can be used to scale the input of the radial kernel[11]):
Infinitely Smooth RBFs
These radial basis functions are from $C^\infty(\mathbb{R})$ and are strictly positive definite functions[12] that require tuning a shape parameter $\varepsilon$:
Gaussian: $\varphi(r) = e^{-(\varepsilon r)^2}$
Multiquadric: $\varphi(r) = \sqrt{1 + (\varepsilon r)^2}$
Inverse quadratic: $\varphi(r) = \dfrac{1}{1 + (\varepsilon r)^2}$
Inverse multiquadric: $\varphi(r) = \dfrac{1}{\sqrt{1 + (\varepsilon r)^2}}$
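Taken at face value, these kernels are one-liners in code; a small sketch (the Python function names are ours, not a standard API):

```python
import numpy as np

# Common infinitely smooth RBFs, written as functions of r = ||x - x_i|| and
# the shape parameter eps.
def gaussian(r, eps):
    return np.exp(-(eps * r) ** 2)

def multiquadric(r, eps):
    return np.sqrt(1.0 + (eps * r) ** 2)

def inverse_quadratic(r, eps):
    return 1.0 / (1.0 + (eps * r) ** 2)

def inverse_multiquadric(r, eps):
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)

r = np.linspace(0.0, 3.0, 7)
print(gaussian(r, eps=1.5))       # decays to 0 away from the center
print(multiquadric(r, eps=1.5))   # grows with r, unlike the other three
```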
Approximation
Radial basis functions are typically used to build up function approximations of the form
$y(\mathbf{x}) = \sum_{i=1}^N w_i \, \varphi(\|\mathbf{x} - \mathbf{x}_i\|),$
where the approximating function $y(\mathbf{x})$ is represented as a sum of $N$ radial basis functions, each associated with a different center $\mathbf{x}_i$ and weighted by an appropriate coefficient $w_i$. The weights $w_i$ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights $w_i$.
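A sketch of this least-squares estimate, assuming Gaussian kernels and made-up one-dimensional sample data (the centers, shape parameter and variable names below are illustrative choices, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data to be approximated: noisy samples of a smooth function.
x = np.linspace(-3.0, 3.0, 40)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

# Centers x_i and shape parameter eps (illustrative choices).
centers = np.linspace(-3.0, 3.0, 10)
eps = 1.0

# Design matrix: column i holds the Gaussian kernel centered at centers[i],
# evaluated at every data point, i.e. phi(||x - x_i||).
Phi = np.exp(-(eps * (x[:, None] - centers[None, :])) ** 2)

# Because the approximant is linear in the weights, ordinary linear least
# squares recovers them directly.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Evaluate y(x) = sum_i w_i phi(||x - x_i||) on a finer grid.
x_fine = np.linspace(-3.0, 3.0, 200)
y_fit = np.exp(-(eps * (x_fine[:, None] - centers[None, :])) ** 2) @ w
print(np.max(np.abs(np.sin(x_fine) - y_fit)))  # rough check of the fit
```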
The sum can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number of radial basis functions is used.
The approximant $y(\mathbf{x})$ is differentiable with respect to the weights $w_i$. The weights could thus be learned using any of the standard iterative methods for neural networks.
Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.[citation needed]
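One common remedy for this extrapolation problem, sketched below under assumptions the text does not spell out (a Gaussian kernel, a linear polynomial tail, and hypothetical variable names), is to augment the RBF system with low-degree polynomial terms and to constrain the RBF weights to be orthogonal to them:

```python
import numpy as np

# Hypothetical one-dimensional interpolation data.
x = np.linspace(-3.0, 3.0, 15)
y = np.tanh(x)
eps = 1.0

# Interpolation matrix of Gaussian kernels centered at the data points.
Phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)

# Linear polynomial tail: columns for 1 and x.
P = np.column_stack([np.ones_like(x), x])

# Block system [Phi P; P^T 0] [w; c] = [y; 0]; the zero block enforces the
# orthogonality (moment) conditions P^T w = 0 on the RBF weights.
n, m = Phi.shape[0], P.shape[1]
A = np.block([[Phi, P], [P.T, np.zeros((m, m))]])
rhs = np.concatenate([y, np.zeros(m)])
sol = np.linalg.solve(A, rhs)
w, c = sol[:n], sol[n:]

# Evaluate the augmented approximant slightly outside the fitting range,
# where the polynomial term carries the trend.
x_new = np.array([3.5, 4.0])
Phi_new = np.exp(-(eps * (x_new[:, None] - x[None, :])) ** 2)
P_new = np.column_stack([np.ones_like(x_new), x_new])
print(Phi_new @ w + P_new @ c)
```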
Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve partial differential equations (PDEs). This was first done in 1990 by E. J. Kansa, who developed the first RBF-based numerical method. It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection-diffusion equation. The function values at points $\mathbf{x}$ in the domain are approximated by the linear combination of RBFs:
$u(\mathbf{x}) \approx \sum_{i=1}^N \gamma_i \, \varphi(\|\mathbf{x} - \mathbf{x}_i\|) \qquad (1)$
The derivatives are approximated as such:
$\dfrac{\partial u(\mathbf{x})}{\partial x_j} \approx \sum_{i=1}^N \gamma_i \, \dfrac{\partial}{\partial x_j} \varphi(\|\mathbf{x} - \mathbf{x}_i\|), \qquad j = 1, \dots, n \qquad (2)$
where $N$ is the number of points in the discretized domain, $n$ the dimension of the domain and $\gamma_i$ the scalar coefficients that are unchanged by the differential operator.[13]
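A sketch of a Kansa-type collocation along these lines, for a one-dimensional Poisson problem with Gaussian RBFs (the problem, kernel, node count and names are illustrative assumptions, not taken from the references):

```python
import numpy as np

# Solve u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 by RBF collocation.
N = 25
x = np.linspace(0.0, 1.0, N)      # collocation points, also used as centers
eps = 5.0

d = x[:, None] - x[None, :]                       # pairwise x - x_i
phi = np.exp(-(eps * d) ** 2)                     # phi(||x - x_i||)
phi_xx = (4 * eps**4 * d**2 - 2 * eps**2) * phi   # second derivative of phi

# Manufactured solution u(x) = sin(pi x), hence f(x) = -pi^2 sin(pi x).
f = -np.pi**2 * np.sin(np.pi * x)

# Enforce the PDE at the interior nodes and the boundary conditions at the
# endpoints; the same coefficients gamma_i appear in both sets of equations.
A = phi_xx.copy()
rhs = f.copy()
A[0, :], rhs[0] = phi[0, :], 0.0     # u(0) = 0
A[-1, :], rhs[-1] = phi[-1, :], 0.0  # u(1) = 0

gamma = np.linalg.solve(A, rhs)
u = phi @ gamma                      # approximate solution at the nodes
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error against the exact solution
```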
Different numerical methods based on Radial Basis Functions were developed thereafter. Some methods are the RBF-FD method,[14][15] the RBF-QR method[16] and the RBF-PUM method.[17]
^ Broomhead & Lowe 1988, p. 347: "We would like to thank Professor M.J.D. Powell at the Department of Applied Mathematics and Theoretical Physics at Cambridge University for providing the initial stimulus for this work."
^ Buhmann, Martin Dietrich (2003). Radial basis functions: theory and implementations. Cambridge University Press. ISBN 978-0511040207. OCLC 56352083.
^ Biancolini, Marco Evangelos (2018). Fast radial basis functions for engineering applications. Springer International Publishing. ISBN 9783319750118. OCLC 1030746230.
^ Fasshauer, Gregory E. (2007). Meshfree Approximation Methods with MATLAB. Singapore: World Scientific Publishing Co. Pte. Ltd. pp. 17–25. ISBN 9789812706331.
^ Wendland, Holger (2005). Scattered Data Approximation. Cambridge: Cambridge University Press. pp. 11, 18–23, 64–66. ISBN 0521843359.
^ Fasshauer, Gregory E. (2007). Meshfree Approximation Methods with MATLAB. Singapore: World Scientific Publishing Co. Pte. Ltd. p. 37. ISBN 9789812706331.
^ Fasshauer, Gregory E. (2007). Meshfree Approximation Methods with MATLAB. Singapore: World Scientific Publishing Co. Pte. Ltd. pp. 37–45. ISBN 9789812706331.
Sirayanone, S. (1988). Comparative studies of kriging, multiquadric-biharmonic, and other methods for solving mineral resource problems. Ph.D. dissertation, Dept. of Earth Sciences, Iowa State University, Ames, Iowa.
Sirayanone, S.; Hardy, R.L. (1995). "The Multiquadric-biharmonic Method as Used for Mineral Resources, Meteorological, and Other Applications". Journal of Applied Sciences and Computations. 1: 437–475.