Why are so many problems linear, and how would one solve nonlinear problems?
I am taking a deep learning in Python class this semester, and we are basically doing linear algebra.
Last lecture we "invented" linear regression with gradient descent from scratch (we did least squares the lecture before), where we talked about defining hypotheses, the loss function, the cost function, etc.
I have two questions:
How is it that so many problems can be viewed as linear problems that are basically "just about finding a solution to the equation $Ax = b$"? Solving that can be done by things like least squares or by training a neural network.
I feel like, "in the real world," most problems are not linear at all. How would one tackle those problems, since linear algebra only applies to linear functions?
You're right: it's quite an assumption that the world is so simple that it can be modeled with lines, planes, and hyperplanes. But, the…
STONE-WEIERSTRASS THEOREM
…says that, technicalities aside, "decent" functions can be approximated arbitrarily well by polynomials. If you've gone far enough in linear algebra, you know that complicated polynomials like $wxz-x^7y^9-wz^2+9w^5x^3yz^8$ can be viewed as linear combinations of basis elements of a vector space. This gives a way to express that polynomial as a dot product of a vector of basis elements and a vector of weights. Across multiple data points, that becomes the familiar $X\beta$ from linear regression.
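For instance, here is a minimal NumPy sketch of that idea (my own illustration, not from the question's class): a cubic model $y \approx \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3$ is nonlinear in $x$ but linear in the weights, so plain least squares on the design matrix recovers $\beta$.

```python
import numpy as np

# Illustrative sketch: a cubic model is *linear in the weights*,
# so ordinary least squares fits it directly.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 100)
y = 1.0 - 3.0 * x + 0.5 * x**3 + rng.normal(scale=0.2, size=x.size)

# Design matrix: each column is a basis function of x.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Solve min over beta of ||X beta - y||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [1.0, -3.0, 0.0, 0.5]
```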
This is not limited to polynomials. Any linear combination (weighted sum/difference) of functions of the original data can be represented as a dot product. Fourier series can be represented this way to obtain periodicity in the regression fit. Splines can model curvature and can have advantages over polynomials in doing so. You can interact functions of single variables with something like $\sin(x_1)\cos(x_2)$.
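A hedged sketch of the Fourier case (again just illustrative NumPy): the basis columns are sines and cosines instead of powers of $x$, but the fit is still ordinary least squares.

```python
import numpy as np

# Same least-squares machinery, periodic basis: still linear in the weights.
rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 200)
y = 2.0 * np.sin(x) + 0.5 * np.cos(2 * x) + rng.normal(scale=0.1, size=x.size)

cols = [np.ones_like(x)]                       # intercept
for k in range(1, 4):                          # harmonics k = 1..3
    cols += [np.sin(k * x), np.cos(k * x)]
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta picks out ~2.0 on the sin(x) column and ~0.5 on the cos(2x) column.
```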
Overall, that seemingly simple formulation of linear regression as $X\beta$ can model an enormous amount of complicated behavior.
@EdM is, of course, correct. And those transformations are very, very flexible.
But let's take a really simple case. One independent variable, one dependent one. And a straight line for the fit (no transformations).
First, it's not a dichotomy between cases where this fits and where it doesn't. Sometimes this simple straight line is a very good fit to the data; a lot of physics problems are like this. Sometimes it is a terrible fit: take anything sinusoidal, as just one case. If $y = \sin(x)$ then a straight line will not work at all.
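To put a number on that (a toy check in NumPy, my own illustration): over a few full periods, the best straight line through $y = \sin(x)$ is essentially flat and explains almost none of the variance.

```python
import numpy as np

# Fit a straight line to y = sin(x) over two full periods.
x = np.linspace(0, 4 * np.pi, 200)
y = np.sin(x)

X = np.column_stack([np.ones_like(x), x])     # intercept + slope
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()
print(f"R^2 of straight-line fit: {r2:.3f}")  # close to 0
```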
More often, though, it's sort of an OK fit. Remember, as George Box said, "all models are wrong, but some are useful." Even in those physics problems the straight line will ignore some issues (e.g. friction, air resistance, whatever). In other cases, there will be a lot of error in the model, and a better fit would be obtained with a more complex model.
A lot of the art and science of data analysis is figuring out how much complexity is "worth it." Should we model a transformation? If so, just a quadratic? Or a spline? Perhaps a fractional polynomial. Maybe we need control variables. Moderators. Mediators. Etc.
Or maybe the straight line is enough.
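One simple, hedged way to ask whether the extra complexity is worth it (an illustrative NumPy sketch, not a prescription): compare candidate models on held-out data rather than on the data they were fit to.

```python
import numpy as np

# Compare polynomial degrees by held-out error, not training error.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 120)
y = x - 0.5 * x**2 + rng.normal(scale=0.5, size=x.size)

train, test = np.arange(80), np.arange(80, 120)
for degree in range(1, 6):
    X = np.vander(x, degree + 1)              # columns x^degree, ..., x, 1
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    mse = np.mean((y[test] - X[test] @ beta) ** 2)
    print(degree, round(float(mse), 3))       # typically lowest near degree 2
```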
In my view, this isn't a purely statistical question. We have to consider the context. Again, for me, this is what made being a statistical consultant fun.
As for how one tackles such problems, well, what I do is try to figure out what makes sense. Computers make this kind of playing around easy. But I try to be careful not to torture the data too much -- and there are ways to avoid that, too.