Yes, I think Gaussian elimination is the way to go. It works like this (sorry if I am teaching granny to suck eggs):
Suppose I have this pair of simultaneous equations:
It's clear I can add the two equations to eliminate y and solve for x ... just add the corresponding terms to get this:
Simplify to get:
Now that we know x, we can put its value back into either of the original equations to get y.
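The equations themselves seem to have been lost in transit (they were probably images), so here's a stand-in pair of my own invention, with the add-to-eliminate trick sketched in Python:

```python
# Stand-in equations (the originals didn't survive):
#   x + y = 5
#   x - y = 1
eq1 = [1, 1, 5]    # coefficients of x and y, then the right-hand side
eq2 = [1, -1, 1]

# Add the corresponding terms: the y's cancel, leaving 2x = 6.
summed = [a + b for a, b in zip(eq1, eq2)]   # [2, 0, 6]
x = summed[2] / summed[0]                    # x = 3
y = eq1[2] - eq1[0] * x                      # back into the first equation: y = 2
print(x, y)
```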
Problem solved. Let's try a more complicated one with three simultaneous equations:
We always look for a way to eliminate one of the variables to get something simpler. In this case, looking at the x's I can see that if I take the first one and subtract twice the second, I can eliminate x. Remember, whatever I do to the x's must also be done to the other terms, so I have to take the first and subtract twice the second, like this:
Then I simplify to get:
Ok, that gives me one equation with two variables. I'll need another one if I want to be able to solve it. So I go back to the original three equations and look at the last two. I see that if I take -3 times the second and add the third, the x's will disappear again:
... which simplifies to:
So now I have two simultaneous equations in two variables:
Using the same old trick, this time I take -5 times the first, and add the second, and -- skipping to the simplification -- I get:
Finally, I have a value for one of my variables. Now I have to go all the way back to my original three equations, and replace all the y's with 3. With the y's gone I will end up with three simultaneous equations in x and z, and I can take any pair of them to solve for either x or z. Armed with
that value, I go back once more, substitute in, and get the last variable. In case you wondered, the correct values are x = 2, y = 3, and z = 5.
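The original equations were images that haven't survived here, so to make this concrete I've cooked up a stand-in system chosen to match every step described above (first minus twice the second kills the x's, and so on) and to land on those same final values. The whole process, sketched in Python:

```python
# Stand-in system (the original was lost); chosen so each step above works out:
#   2x -  4y +  z =  -3
#    x +   y +  z =  10
#   3x - 10y - 2z = -34
eq1 = [2, -4, 1, -3]    # x, y, z coefficients, then the right-hand side
eq2 = [1, 1, 1, 10]
eq3 = [3, -10, -2, -34]

def combine(p, a, q, b):
    """p times equation a plus q times equation b, term by term."""
    return [p * ai + q * bi for ai, bi in zip(a, b)]

first = combine(1, eq1, -2, eq2)       # first minus twice the second: -6y - z = -23
second = combine(-3, eq2, 1, eq3)      # -3 times the second plus the third: -13y - 5z = -64
final = combine(-5, first, 1, second)  # -5 times the first plus the second: 17y = 51
y = final[3] / final[1]                # y = 3
z = (first[3] - first[1] * y) / first[2]         # back into -6y - z = -23: z = 5
x = (eq2[3] - eq2[1] * y - eq2[2] * z) / eq2[0]  # back into the second original: x = 2
print(x, y, z)
```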
Whew! All very tedious, but you just follow fairly simple rules to get your answer. Let's take a look at this same problem a slightly different way. The original three equations each had an x term, a y term, and a z term, and a sum on the right hand side. Did it really matter that the variables were called x, y and z? No, it would have worked out exactly the same if they were p, q, and r, or x1, x2 and x3, or anything else. The variable names don't really matter, as long as we know which positions correspond to which variables.
So here's another formulation of the same problem, with the variable names left out, and just the co-efficients retained:
Remember, the meaning of each of the three rows is still "some number of x plus some number of y plus some number of z = some total". I've also added a label to each equation just to identify it, so the first equation is called R1. (Quick note here that if, say, z didn't appear at all in one of the equations, I could just put a zero in the corresponding column to mean "zero z's".) We'll just say in passing, although it's not important, that the above arrangement is called an
augmented matrix.
Ok, where's all this getting us? Well, let's follow the steps that we did with the three original equations, using our new notation:
Hopefully you can see what's going on here. The middle line shows that I took the first equation and subtracted twice the original second equation. That gives me a new equation, which I've given a new label: R2a. Similarly with the third line, it's exactly the same as the second substitution step I did with the original equations.
Ok, just to show there's more than one way to skin a cat, this time I'm going to eliminate that -13y as follows:
So now I've taken 13 times R2a and subtracted 6 times the original R3a to give a new R3b. From here I pretty much have a value for z, since I know 17z = 85, so z = 5. Now I can substitute that back into the second last equation above (R2a) to get -6y = -18, so y = 3. And armed with both y and z, I can go back to R1 and find that x = 2.
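In code, those row operations are just arithmetic on the rows of the augmented matrix. The original coefficients were lost, so these are stand-ins of my own, chosen so the intermediate rows match the -6y, -13y and 17z = 85 quoted above:

```python
# Stand-in augmented matrix (original coefficients lost; these reproduce
# the -6y, -13y and 17z = 85 rows described above):
R1 = [2, -4, 1, -3]
R2 = [1, 1, 1, 10]
R3 = [3, -10, -2, -34]

def combine(p, a, q, b):
    """p times row a plus q times row b, term by term."""
    return [p * ai + q * bi for ai, bi in zip(a, b)]

R2a = combine(1, R1, -2, R2)     # [0, -6, -1, -23]
R3a = combine(-3, R2, 1, R3)     # [0, -13, -5, -64]
R3b = combine(13, R2a, -6, R3a)  # [0, 0, 17, 85], i.e. 17z = 85
z = R3b[3] / R3b[2]                          # z = 5
y = (R2a[3] - R2a[2] * z) / R2a[1]           # back into R2a: y = 3
x = (R1[3] - R1[1] * y - R1[2] * z) / R1[0]  # back into R1: x = 2
print(x, y, z)
```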
Note the final form of the matrix above. The leading diagonal is the line from the top left of the co-efficients to the bottom right, and it splits the matrix in half. Everything to the lower left of the leading diagonal is zero. That's called an upper triangular matrix, and once we have one we have solved for at least one of our variables. That is always our aim. At each step we try to eliminate one co-efficient and make it zero. That's called
Gaussian elimination. Once we have our upper triangular matrix we solve for all our variables as just noted, using
back substitution. By tediously following the same simple rules we can solve large systems of simultaneous equations. Note, though, that not every system necessarily
has a solution. But let's skip lightly over that for now.
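For the record, the whole recipe — forward elimination down to an upper triangular matrix, then back substitution — can be sketched in a few lines of Python. This is my own rough sketch, not anybody's library code; it includes a pivot check so a system with no unique solution fails loudly instead of dividing by zero:

```python
def solve(aug):
    """Solve a system given as an augmented matrix (one list per row,
    coefficients then right-hand side), by Gaussian elimination with
    partial pivoting followed by back substitution."""
    n = len(aug)
    a = [row[:] for row in aug]  # work on a copy
    for col in range(n):
        # Pick the row with the largest coefficient in this column (partial pivoting).
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < 1e-12:
            raise ValueError("no unique solution")
        a[col], a[pivot] = a[pivot], a[col]
        # Zero out this column in every row below: the upper triangular shape emerges.
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    # Back substitution, bottom row up.
    xs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * xs[c] for c in range(r + 1, n))
        xs[r] = (a[r][n] - s) / a[r][r]
    return xs

# A made-up example system; prints approximately [2.0, 3.0, 5.0].
print(solve([[2, -4, 1, -3], [1, 1, 1, 10], [3, -10, -2, -34]]))
```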
Now, you can easily see how this solves TarfHead's puzzle, right?
If you can, you're a better nerd than me, because we haven't even touched on that yet. We've just done a few preliminaries. But we'll get there in the next post.