For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two common points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the entire line passing through these points.[6]

In the simple case of a function of one variable, say h(x), we can solve an equation of the form h(x) = c for some constant c by considering what is known as the inverse function of h. Given a function h : A → B, the inverse function, denoted h⁻¹ and defined as h⁻¹ : B → A, is a function such that h⁻¹(h(x)) = x for every x in A. Now, if we apply the inverse function to both sides of h(x) = c, where c is a constant value in B, we obtain x = h⁻¹(c).

One extremely helpful view is that each unknown is a weight for a column vector in a linear combination.

Types of problems with existing dedicated solvers include linear and non-linear equations. Alternatively, x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value.

Solving an optimization problem is generally not referred to as "equation solving", as solving methods typically start from a particular solution and repeatedly search for a better one until the best solution is eventually found.

When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set.
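The inverse-function method described above can be sketched in a few lines of Python. The function h and its inverse below are illustrative choices, not taken from the text:

```python
# Solving h(x) = c by applying the inverse function to both sides.
# h(x) = 2x + 3 is an invertible function chosen for illustration.

def h(x):
    return 2 * x + 3

def h_inv(y):
    # Inverse of h, obtained by solving y = 2x + 3 for x
    return (y - 3) / 2

c = 11
x = h_inv(c)        # apply h^-1 to both sides of h(x) = c
assert h(x) == c    # x satisfies the original equation
```

The same pattern works for any invertible h; when h is not injective on its domain, one must restrict the domain (as with the square root or arcsine) before an inverse exists.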
Thus the solution set may be a plane, a line, a single point, or the empty set. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
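The single-point case can be sketched numerically: three planes whose equations form an invertible system meet in exactly one point. This is a minimal sketch assuming NumPy; the coefficients are illustrative, not from the text:

```python
import numpy as np

# Three planes, one equation per row of A, chosen so that det(A) != 0:
# the planes intersect in a single point, so the solution set is that point.
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0,  2.0],
              [2.0,  1.0, -1.0]])
b = np.array([6.0, 5.0, 1.0])

point = np.linalg.solve(A, b)       # unique intersection point
assert np.allclose(A @ point, b)    # it satisfies all three equations
```

If the rows of A were linearly dependent, `np.linalg.solve` would raise `LinAlgError`, matching the empty-set or line/plane cases described above.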

The entire solution set can also be expressed in matrix form. However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters.

Depending on the context, solving an equation may consist of finding either any solution (finding a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval.

It was the first computer program which separated its knowledge of problems (in the form of domain rules) from its strategy of how to solve problems (as a general search engine).

A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. An equation may be written in the form f(x1, ..., xn) = c, where f is a function, x1, ..., xn are the unknowns, and c is a constant.[1][2][3][4][5] For example, one may consider a system of three equations in the three variables x, y, z. This is typically the case when considering polynomial equations, such as quadratic equations.
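The matrix form, together with the view of each unknown as a weight on a column vector, can be sketched with NumPy. The system's coefficients here are illustrative, not from the text:

```python
import numpy as np

# A system of three linear equations in x, y, z written as A @ v = b,
# where each row of A holds one equation's coefficients (illustrative values):
#   3x + 2y -  z =  1
#   2x - 2y + 4z = -2
#   -x + y/2 - z =  0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

v = np.linalg.solve(A, b)

# Column view: the unknowns weight the columns of A in a linear combination.
combo = v[0] * A[:, 0] + v[1] * A[:, 1] + v[2] * A[:, 2]
assert np.allclose(combo, b)
```

Asking whether the system has a solution is thus the same as asking whether b can be written as a linear combination of A's columns.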

As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to lead to success.

In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent and the constant terms do not satisfy the dependence relation; in that case the equation system is inconsistent and has no solution.

If A is invertible, with inverse A⁻¹, the general solution collapses to the unique one: x = A⁻¹b + (I − A⁻¹A)w = A⁻¹b + (I − I)w = A⁻¹b, where w is an arbitrary vector.

An equation with unknowns x, y and z, in which a term such as 21z appears on the right-hand side, can be put in the above form by subtracting 21z from both sides of the equation. This is an example of equivalence in a system of linear equations.

Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.
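The collapse of the general solution for an invertible A can be checked directly: the term (I − A⁻¹A)w vanishes, so the result does not depend on w. This is a minimal sketch assuming NumPy, with an illustrative matrix and arbitrary w:

```python
import numpy as np

# Sketch of x = A^-1 b + (I - A^-1 A) w for an invertible A:
# the correction term is zero, so x is the unique solution A^-1 b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
w = np.array([7.0, -4.0])               # arbitrary vector

A_inv = np.linalg.inv(A)
x = A_inv @ b + (np.eye(2) - A_inv @ A) @ w
assert np.allclose(x, A_inv @ b)        # (I - I) w contributed nothing
assert np.allclose(A @ x, b)
```

For a singular or rectangular A, replacing `np.linalg.inv` with the pseudoinverse `np.linalg.pinv` in the same formula parameterizes the whole solution set as w ranges over all vectors.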

Numerical solutions to a homogeneous system can be found with a singular value decomposition.

The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all equations or inequalities.

A general system of m linear equations with n unknowns can be written as

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    am1 x1 + am2 x2 + ... + amn xn = bm.

It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods. Such exhaustive approaches induce an exponential computational time that dramatically limits their usability.
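The SVD approach to a homogeneous system Ax = 0 can be sketched as follows: a nonzero solution is a right-singular vector belonging to a (near-)zero singular value. This assumes NumPy; the rank-deficient matrix is an illustrative construction:

```python
import numpy as np

# Homogeneous system A x = 0.  A is rank-deficient by construction
# (third row = first row + second row), so a nonzero solution exists.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

U, s, Vt = np.linalg.svd(A)
x = Vt[-1]                  # right-singular vector of the smallest singular value
assert np.allclose(A @ x, 0.0, atol=1e-8)
```

Because the rows of Vt are orthonormal, x comes out as a unit vector; any scalar multiple of it is also a solution, reflecting that the null space is a subspace.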
This solution set has the following additional properties: if x and y are two vectors in the solution set, then so is their sum x + y; and if x is in the solution set and c is any scalar, then cx is also in the solution set. These are exactly the properties required for the solution set to be a linear subspace of Rn.

Solutions of differential equations can be implicit or explicit.[citation needed]

A linear system may behave in any one of three possible ways: it may have no solution, a single unique solution, or infinitely many solutions. For a system involving two variables (x and y), each linear equation determines a line on the xy-plane.

The number of vectors in a basis for the span is now expressed as the rank of the matrix. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. A system Ax = b has a solution if and only if the vector b lies in the image of the linear transformation A.

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory.
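The image condition above is equivalent to a rank test: Ax = b is solvable exactly when the rank of A equals the rank of the augmented matrix [A | b]. A minimal sketch assuming NumPy, with illustrative data:

```python
import numpy as np

# Consistency test: A x = b has a solution iff b lies in the image of A,
# i.e. iff rank(A) == rank([A | b]).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # rank 1: the rows are parallel
b_good = np.array([3.0, 6.0])        # in the image of A
b_bad  = np.array([3.0, 5.0])        # not in the image of A

def consistent(A, b):
    aug = np.column_stack([A, b])    # augmented matrix [A | b]
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

assert consistent(A, b_good)
assert not consistent(A, b_bad)
```

When the ranks agree and also equal the number of unknowns, the solution is unique; when they agree but are smaller, the solution set is infinite, matching the three behaviors listed above.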