Polynomial Equation Solver

Solve polynomial equations of any degree with real and complex roots. Works online for linear, quadratic, cubic, quartic, and higher-degree polynomials.

Find every real and complex root of any polynomial equation, from a simple linear ax + b = 0 up through high-degree polynomials. Enter coefficients from the highest degree to the lowest; the solver returns all n roots (counting multiplicity) using the appropriate method for each degree.

What is a Polynomial Equation?

A polynomial equation is an equation of the form a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0, where n is a non-negative integer (the degree of the polynomial), x is the unknown, and a_0 through a_n are the coefficients with a_n ≠ 0 (the "leading coefficient"). The Fundamental Theorem of Algebra guarantees that a polynomial of degree n has exactly n roots in the complex numbers, counting multiplicity — some may be real, some may be complex conjugate pairs, but the count always equals the degree.

aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀ = 0

where n is a non-negative integer (the degree), and a₀, a₁, ..., aₙ are coefficients with aₙ ≠ 0.

How Does the Polynomial Solver Work?

Different methods are used depending on the degree:

  • Degree 1 (linear): direct solution x = -b/a, instant and exact
  • Degree 2 (quadratic): the quadratic formula gives exact closed-form roots, including the discriminant test for real vs complex
  • Degree 3 (cubic): Cardano's formula in principle, but for numerical stability the solver uses a hybrid of trigonometric substitution (Vieta's) and Newton iteration
  • Degree 4+ (quartic and beyond): numerical methods — primarily the Durand-Kerner algorithm, which finds all n roots simultaneously, with Newton-Raphson polishing for accuracy
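The exact low-degree branches can be sketched in a few lines. This is a hypothetical helper, not this tool's actual source; `cmath.sqrt` keeps the complex case uniform:

```python
import cmath

def solve_low_degree(coeffs):
    """Roots of a degree-1 or degree-2 polynomial, coefficients given
    in degree-descending order, e.g. [a, b, c] for ax^2 + bx + c = 0."""
    if coeffs[0] == 0:
        raise ValueError("leading coefficient must be nonzero")
    if len(coeffs) == 2:                    # ax + b = 0
        a, b = coeffs
        return [-b / a]
    if len(coeffs) == 3:                    # ax^2 + bx + c = 0
        a, b, c = coeffs
        d = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles d < 0 too
        return [(-b + d) / (2 * a), (-b - d) / (2 * a)]
    raise NotImplementedError("degree 3 and up are handled numerically")
```

For x² + 1 = 0 this returns ±i, matching the discriminant test described above.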

Applications of Polynomial Equations

Polynomial equations appear across science, engineering, and mathematics:

  • Physics: motion equations (projectile y = -½gt² + v₀t + y₀ is a quadratic in time), oscillation modes, quantum mechanics eigenvalues
  • Engineering: characteristic equations for control systems and circuits (where the roots are the system's poles and decide stability)
  • Economics: cost and revenue functions, marginal-analysis derivatives, polynomial regression for trend fitting
  • Computer graphics: Bézier and spline curves, ray-surface intersection (a ray hitting a sphere is a quadratic, hitting a torus is a quartic)
  • Cryptography: polynomial arithmetic over finite fields underpins error-correcting codes (Reed-Solomon) and many post-quantum schemes

What does "degree" mean for a polynomial?

The degree is the highest power of x in the polynomial. For 3x² + 5x - 7, the highest term is 3x², so the degree is 2 (quadratic). For 2x⁵ - x + 1, the highest term is 2x⁵, so the degree is 5 (quintic). The degree determines several things: the number of roots (always exactly n, counting multiplicity), the maximum number of turning points in the graph (at most n-1), and the end behavior (for large |x|, the polynomial behaves like its highest-degree term). When you enter coefficients in this solver, you enter them in degree-descending order — first the coefficient of xⁿ, then xⁿ⁻¹, and so on, down to the constant term.
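The degree-descending entry order pairs naturally with Horner's method for evaluating the polynomial; a minimal sketch (the function name is mine):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial at x, coefficients in degree-descending
    order, using Horner's method (one multiply and one add per coefficient)."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc
```

`poly_eval([3, 5, -7], 2)` evaluates 3x² + 5x − 7 at x = 2, giving 15.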

Why does the Fundamental Theorem of Algebra guarantee n roots?

The Fundamental Theorem of Algebra (FTA), proven by Carl Friedrich Gauss in 1799, states that every non-constant polynomial with complex coefficients has at least one complex root. By repeatedly factoring out roots, a degree-n polynomial can always be written as a product of n linear factors (x - r₁)(x - r₂)...(x - rₙ), so it has exactly n roots when counted with multiplicity. Some of those roots may be repeated (the polynomial x² - 2x + 1 = (x-1)² has the root 1 with multiplicity 2), and complex roots of polynomials with real coefficients always come in conjugate pairs (if 2+3i is a root, then 2-3i is also a root). So for a real-coefficient polynomial of odd degree, at least one root must be real.

Can a polynomial have only complex roots?

Yes, if the polynomial has real coefficients and even degree. The simplest example is x² + 1 = 0, whose roots are +i and -i — both purely imaginary, no real solutions. The pattern generalizes: x⁴ + 1 = 0 has four complex roots and no real ones, x⁶ + 1 = 0 has six complex roots and no real ones. Geometrically, this corresponds to a polynomial graph that never crosses the x-axis. For any real-coefficient polynomial of odd degree, at least one real root must exist (because the graph extends to +∞ on one end and -∞ on the other, so it must cross zero somewhere). This is why every cubic equation has at least one real solution — useful to know when you encounter problems with cubics in physics or engineering.

What's the difference between Cardano's formula and numerical methods?

Cardano's formula (for cubics) and Ferrari's formula (for quartics) are closed-form algebraic expressions that give exact roots in terms of the coefficients, the way the quadratic formula does. They exist but are hideously complex and numerically unstable — Cardano's formula can produce complex intermediate values even when all final roots are real (the so-called casus irreducibilis), forcing trigonometric tricks to recover sensible numbers. For degree 5 and higher, the Abel-Ruffini theorem (1824) proves that no general closed-form solution using radicals exists at all. Modern solvers therefore use iterative numerical methods — Newton-Raphson, Durand-Kerner, Bairstow — which converge to roots to arbitrary precision in finite time. Trade-off: numerical methods give arbitrarily accurate decimals but not symbolic forms. For symbolic results, use a CAS like Wolfram or Maxima.
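As a taste of the iterative side, here is a minimal Newton-Raphson sketch for polishing a single real root. It is a hypothetical helper, assuming real coefficients and a reasonable starting guess:

```python
def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """One root of a polynomial (degree-descending coefficients) by
    Newton-Raphson: x <- x - p(x)/p'(x) until the step is tiny."""
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]  # p'(x) coefficients
    x = x0
    for _ in range(max_iter):
        p = 0.0
        for c in coeffs:              # Horner evaluation of p(x)
            p = p * x + c
        dp = 0.0
        for c in deriv:               # Horner evaluation of p'(x)
            dp = dp * x + c
        if dp == 0:                   # flat spot: bail out
            break
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x
```

Starting from x₀ = 1.5 on x³ − 2, this converges to the cube root of 2 in a handful of iterations, showing the digit-doubling behavior typical of quadratic convergence.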

What is the Durand-Kerner method this solver uses?

Durand-Kerner (also called Weierstrass) is an iterative method that approximates all n roots of a polynomial simultaneously, instead of finding them one at a time. It starts with n distinct guesses spread around the complex plane (typically on a circle whose radius is bounded by the polynomial's coefficients via Cauchy's bound) and updates each guess using the formula r_i ← r_i - p(r_i) / ∏(r_i - r_j) over all j ≠ i. The method has quadratic convergence near the roots — meaning the number of correct digits roughly doubles each iteration — and unlike Newton-Raphson, it finds all roots in parallel without needing to deflate the polynomial after each root is found. Note that some widely used root-finders take a different route: MATLAB's roots() and NumPy's roots compute the roots as eigenvalues of the companion matrix. This tool uses Durand-Kerner.
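A bare-bones Durand-Kerner iteration, using the classic (0.4 + 0.9i)ᵏ initial guesses rather than a Cauchy-bound circle — a sketch, not this tool's production code:

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=500):
    """All n roots of a polynomial, coefficients in degree-descending
    order, via Durand-Kerner simultaneous iteration."""
    a = [complex(c) / coeffs[0] for c in coeffs]   # normalize to monic
    n = len(a) - 1

    def p(x):
        acc = 0j
        for c in a:                                # Horner evaluation
            acc = acc * x + c
        return acc

    # distinct, non-real starting points: powers of 0.4 + 0.9i
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        moved = 0.0
        for i in range(n):
            denom = 1 + 0j
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            delta = p(roots[i]) / denom
            roots[i] -= delta
            moved = max(moved, abs(delta))
        if moved < tol:                            # every update tiny: converged
            break
    return roots
```

On (x − 1)(x − 2)(x − 3) = x³ − 6x² + 11x − 6 this returns approximations of 1, 2, and 3 with negligible imaginary parts.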

Why does the solver sometimes give very small imaginary parts for roots that should be real?

Round-off error in floating point arithmetic. A polynomial like x² - 4x + 4 = (x-2)² should give the double root x = 2, but a numerical solver may compute it as x = 2.0000000001 + 0.0000000003i — the tiny imaginary part is purely a rounding artifact, not a real complex component. The solver applies a small threshold (typically 1e-10 relative to the coefficient magnitude) and rounds those near-zero imaginary parts to zero before displaying. If you see a result like "0.000000001 + 0.000000001i", treat it as zero. For exact roots, you would need a symbolic algebra system that works in exact arithmetic, but the floating-point answers here are accurate to about 12-14 decimal places, more than enough for any engineering or physics use.
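The cleanup step can be sketched like this (the threshold rule and function name are mine, mirroring the behavior described above):

```python
def clean_roots(roots, coeffs, rel_tol=1e-10):
    """Zero out imaginary parts small enough to be round-off artifacts,
    judged relative to the largest coefficient magnitude."""
    scale = max(abs(c) for c in coeffs) or 1.0
    return [complex(r.real, 0.0) if abs(r.imag) < rel_tol * scale else r
            for r in roots]
```

A genuinely complex root such as i is left untouched, while 2 + 3·10⁻¹⁰i collapses to the real root 2.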

What if the leading coefficient is zero?

Then the polynomial is effectively lower degree than what you entered. If you set up a degree-3 polynomial as 0x³ + 2x² - x + 1 = 0, the leading 0 means it is actually a quadratic 2x² - x + 1 = 0, not a cubic. The solver detects this case and refuses to solve, displaying an error — because solving a "degree-3 with leading coefficient 0" is ambiguous (some libraries treat it as the lower-degree polynomial, others as undefined). Drop the leading zero coefficient and reduce the degree to match the actual highest nonzero term.
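If you would rather auto-reduce than error out (the convention some libraries use, as noted above), the preprocessing is short; a sketch:

```python
def normalize(coeffs):
    """Strip leading zero coefficients so the degree matches the highest
    actual nonzero term; reject the identically-zero polynomial."""
    if all(c == 0 for c in coeffs):
        raise ValueError("all coefficients are zero: not a solvable equation")
    i = 0
    while coeffs[i] == 0:   # safe: at least one nonzero coefficient exists
        i += 1
    return coeffs[i:]
```

With this, `[0, 2, -1, 1]` becomes the quadratic `[2, -1, 1]` before solving.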

Can this solve polynomials of degree 100 or higher?

In principle yes, but with caveats. The Durand-Kerner method converges for any degree, and modern computers handle degree-100 polynomial root-finding in milliseconds. The practical problem is numerical conditioning: high-degree polynomials with closely spaced or repeated roots become exquisitely sensitive to coefficient round-off. The Wilkinson polynomial (degree 20, with roots at 1, 2, ..., 20) is the famous teaching example: a perturbation of about 10⁻⁷ (2⁻²³) in a single coefficient moves several roots by more than 1.0 and pushes some off the real axis entirely. For polynomials above degree 50 with sensitive coefficients, you may see large errors despite the algorithm converging. If you need that high a degree for serious work, switch to a method designed for it — companion-matrix eigenvalue methods, or symbolic root-finding in exact arithmetic. For typical use (engineering, schoolwork, physics homework up to maybe degree 10), this solver is plenty.