Here is a summary of all these concepts. Most of the concepts are from Wikipedia. A larger figure is shown below:
- In mathematics, the gradient is a multi-variable generalization of the derivative. While a derivative can be defined on functions of a single variable, for functions of several variables the gradient takes its place. The gradient is a vector-valued function, as opposed to the derivative, which is scalar-valued. The gradient (or gradient vector field) of a scalar function f(x1, x2, …, xn) is denoted ∇f or ∇⃗f, where ∇ (the nabla symbol) denotes the vector differential operator, del. The notation grad f is also commonly used for the gradient. The gradient of f is defined as the unique vector field whose dot product with any unit vector v at each point x is the directional derivative of f along v.
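As a small sketch of the defining property above (assuming NumPy is available; the function and point are illustrative, not from the source), a central-difference gradient can be dotted with a unit vector to recover the directional derivative:

```python
import numpy as np

def grad(f, x, h=1e-6):
    # Central-difference approximation to the gradient of f at x
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

f = lambda p: p[0]**2 + p[1]**2     # example: f(x, y) = x^2 + y^2
g = grad(f, [1.0, 2.0])             # analytic gradient at (1, 2) is (2, 4)
v = np.array([1.0, 0.0])            # unit vector along the x-axis
dir_deriv = g @ v                   # directional derivative of f along v
```

The dot product `g @ v` picks out the rate of change of f in the direction of v, which is exactly the directional derivative that defines the gradient.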
- In physical terms, the divergence of a three-dimensional vector field is the extent to which the vector field flow behaves like a source at a given point. It is a local measure of its “outgoingness” – the extent to which there is more of some quantity exiting an infinitesimal region of space than entering it. If the divergence is nonzero at some point then there is compression or expansion at that point. (Note that we are imagining the vector field to be like the velocity vector field of a fluid in motion when we use terms such as flow.) Let x, y, z be a system of Cartesian coordinates in 3-dimensional Euclidean space, and let i, j, k be the corresponding basis of unit vectors. The divergence of a continuously differentiable vector field F = Ui + Vj + Wk is defined as the scalar-valued function div F = ∇·F = ∂U/∂x + ∂V/∂y + ∂W/∂z.
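A minimal numerical sketch of that sum of partial derivatives, assuming NumPy (the radial field is an illustrative example with a known answer, div F = 3 everywhere):

```python
import numpy as np

def divergence(F, p, h=1e-6):
    # Sum of central-difference partials dF_i/dx_i at point p
    p = np.asarray(p, dtype=float)
    d = 0.0
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        d += (F(p + e)[i] - F(p - e)[i]) / (2 * h)
    return d

F = lambda p: np.array([p[0], p[1], p[2]])  # radial field F = xi + yj + zk
d = divergence(F, [0.5, -1.0, 2.0])         # analytic divergence is 3
```

Positive divergence here reflects the field “flowing outward” from every point, matching the source-like behavior described above.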
- In mathematics and physics, a scalar field associates a scalar value to every point in a space – possibly physical space. The scalar may either be a (dimensionless) mathematical number or a physical quantity. In a physical context, scalar fields are required to be independent of the choice of reference frame, meaning that any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.
- In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇2, or Δ. The Laplacian Δf(p) of a function f at a point p, up to a constant depending on the dimension, is the rate at which the average value of f over spheres centered at p deviates from f(p) as the radius of the sphere grows. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems such as cylindrical and spherical coordinates, the Laplacian also has a useful form.
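The Cartesian form (sum of second partials) can be checked symbolically; a sketch assuming SymPy is available, with an illustrative function chosen for its simple answer:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2
# Laplacian in Cartesian coordinates: sum of second partial derivatives
lap = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)
```

Each second partial contributes 2, so the Laplacian of x² + y² + z² is the constant 6, consistent with Δ = ∇·∇ applied to this quadratic.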
- In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A. The trace is a linear mapping.
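Both the definition and the linearity claim are easy to verify numerically; a sketch assuming NumPy, with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
trA = np.trace(A)  # sum of main-diagonal elements: 1 + 4 = 5
# Linearity of the trace: tr(aA + bB) = a*tr(A) + b*tr(B)
lin = np.trace(2*A + 3*B) == 2*np.trace(A) + 3*np.trace(B)
```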
- In mathematics, the field trace is a particular function defined with respect to a finite field extension L/K, which is a K-linear map from L onto K.
- In fluid dynamics, circulation is the line integral around a closed curve of the velocity field. Circulation is normally denoted Γ (Greek uppercase gamma). Circulation was first used independently by Frederick Lanchester, Wilhelm Kutta, and Nikolai Zhukovsky.
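As a hedged numerical sketch of Γ = ∮ v·dl (assuming NumPy; the rotational field and unit-circle contour are illustrative choices, not from the source):

```python
import numpy as np

# Velocity field v(x, y) = (-y, x): rigid rotation about the origin
def v(x, y):
    return np.array([-y, x])

# Parametrize the unit circle and accumulate the line integral v . dl
t = np.linspace(0.0, 2*np.pi, 2000, endpoint=False)
dt = t[1] - t[0]
gamma = 0.0
for ti in t:
    x, y = np.cos(ti), np.sin(ti)
    dl = np.array([-np.sin(ti), np.cos(ti)]) * dt  # tangent vector * dt
    gamma += v(x, y) @ dl
# For this field the analytic circulation is 2*pi
```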
- In vector calculus, the Jacobian matrix (/dʒəˈkoʊbiən/, /dʒɪ-, jɪ-/) is the matrix of all first-order partial derivatives of a vector-valued function. When the matrix is square, both the matrix and its determinant are referred to as the Jacobian in the literature. The Jacobian generalizes the gradient of a scalar-valued function of multiple variables, which itself generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian of a scalar-valued multivariate function is the gradient, and that of a scalar-valued function of a single variable is simply its derivative. The Jacobian can also be thought of as describing the amount of “stretching”, “rotating”, or “transforming” that a transformation imposes locally. For example, if (x′, y′) = f(x, y) is used to transform an image, the Jacobian Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed.
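A symbolic sketch, assuming SymPy: for the standard polar-to-Cartesian change of variables, the Jacobian determinant is the local area-scaling factor r.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
F = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])  # (x, y) = (r cos(theta), r sin(theta))
J = F.jacobian([r, th])                      # matrix of first-order partials
detJ = sp.simplify(J.det())                  # local "stretching" factor
```

The determinant simplifies to r, which is why dx dy becomes r dr dθ in polar integrals.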
- In mathematics, the Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term “functional determinants”. The Hessian matrix of a convex function is positive semi-definite. Refining this property allows us to test whether a critical point x is a local maximum, local minimum, or a saddle point, as follows. If the Hessian is positive definite at x, then f attains an isolated local minimum at x. If the Hessian is negative definite at x, then f attains an isolated local maximum at x. If the Hessian has both positive and negative eigenvalues, then x is a saddle point for f. Otherwise the test is inconclusive. This implies that, at a local minimum (respectively, a local maximum), the Hessian is positive semi-definite (respectively, negative semi-definite).
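The second-derivative test above can be sketched symbolically (assuming SymPy; the function is an illustrative saddle, not from the source):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2                    # classic saddle with critical point at the origin
H = sp.hessian(f, (x, y))          # second-order partials: [[2, 0], [0, -2]]
eigs = list(H.eigenvals().keys())
# Mixed-sign eigenvalues -> the critical point is a saddle point
is_saddle = any(e > 0 for e in eigs) and any(e < 0 for e in eigs)
```

Had all eigenvalues been positive (positive definite Hessian), the test would instead report an isolated local minimum.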