Stereo audio has completely separate audio signals for the left and right channels. One of the reasons to use separate channels is to allow the creation of a "sound stage". If a track is well mastered/mixed and you have good speakers/headphones, you should be able to pick out each performer's location as if they were on a stage in front of you. For example, you should be able to discern where the lead guitarist is standing vs. where the bass guitarist is standing. This is done by controlling the volume and phase of the sound in each channel for each performer. Generally the vocalist is placed dead center of the "sound stage", and the way to achieve that effect with stereo audio is to feed the exact same signal (in both phase and volume) into both the left and right channels. The 'mix' is just a term for the way the various recordings of each performer are combined to create the final product.
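Here's a minimal sketch of the idea in Python/NumPy: feeding the identical signal into both channels centers a source, while a constant-power pan law (one common convention, assumed here for illustration) moves it toward one side. The `pan` function and its parameters are hypothetical, not from any particular DAW.

```python
import numpy as np

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """position in [-1, 1]: -1 = hard left, 0 = dead center, +1 = hard right.
    Returns a (samples, 2) stereo array."""
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=1)

# Dead-center vocal: the identical signal (same phase and volume) in both
# channels, since cos(pi/4) == sin(pi/4).
vocal = np.random.randn(44100)   # stand-in for a recorded vocal track
centered = pan(vocal, 0.0)       # left channel == right channel
```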
There are a lot of fun things you can do acoustically with stereo sound. Some phasing effects can actually be pretty 'trippy', for lack of a better word.
Interesting. Any comment on how they get the "front of stage" and "back of stage" effect? I used to not believe this was possible, until I listened to a good recording on good speakers and could place the bass player clearly to the left and behind the singer.
Some depth can be modeled with phase control. By controlling the phase of a signal relative to another, you can create a perceived time delay which makes it appear as though one performer is behind another. I'd also add that this is much easier to do with low frequencies (bass player) due to the wavelength being so long. At 49 Hz (G1 on a bass guitar) the acoustical wavelength is ~6.9 meters, which means even small phase shifts (time delays) can create meaningful depth. This approach is pretty useless at higher frequencies, since the achievable time delay (and depth) gets very small.
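As a rough sketch of the delay-for-depth idea, here's how you might push one track "back" by delaying it as if its source were a few meters farther from the listener. The `depth_m` parameter and the 343 m/s speed of sound are illustrative assumptions; real mixes combine this with level and reverb changes.

```python
import numpy as np

SAMPLE_RATE = 44100        # samples per second
SPEED_OF_SOUND = 343.0     # m/s at roughly room temperature

def push_back(signal: np.ndarray, depth_m: float) -> np.ndarray:
    """Delay a mono track as if its source were depth_m farther away."""
    delay_samples = int(round(depth_m / SPEED_OF_SOUND * SAMPLE_RATE))
    # Prepend silence so this track arrives later than the others.
    return np.concatenate([np.zeros(delay_samples), signal])

# Example: a 49 Hz tone "moved" 2 m behind the rest of the mix,
# i.e. delayed by ~257 samples (~5.8 ms).
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
bass = np.sin(2 * np.pi * 49.0 * t)
bass_behind = push_back(bass, depth_m=2.0)
```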
There are other ways to do this via more complex 3D acoustical modeling, mostly focused on modeling reverberation effects, but you don't see that much in music recording. It is used a lot in games, though.
In addition to what mbell said, another simple approach for making one sound seem further away than another is changing the balance of the direct vs. reverberant sound (more distant == more reverberant sound, lower direct sound volume). This would happen naturally in a stereo recording with two microphones, and can also be done artificially.
Predelay time on the input to the reverb. The longer the gap between the direct and reflected sound, the closer to you the direct sound will seem to be.
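Combining the two points above, here's a hedged sketch: a toy convolution reverb (exponentially decaying noise as the impulse response, a stand-in for a real room response) where more "distance" means more wet and less dry signal, and a longer predelay (the silent gap before the first reflection) makes the direct sound seem closer. The decay rate, predelay values, and wet/dry crossfade law are all illustrative assumptions, not standard settings.

```python
import numpy as np

SAMPLE_RATE = 44100

def toy_reverb_ir(seconds: float = 1.5, pre_delay_ms: float = 0.0) -> np.ndarray:
    """Decaying-noise impulse response with an optional predelay gap.
    A longer gap between direct and reflected sound => the direct sound
    seems closer, as the comment above describes."""
    n = int(seconds * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    tail = np.random.randn(n) * np.exp(-3.0 * t)   # crude room tail
    gap = np.zeros(int(pre_delay_ms / 1000.0 * SAMPLE_RATE))
    return np.concatenate([gap, tail])

def place(dry: np.ndarray, distance: float, pre_delay_ms: float) -> np.ndarray:
    """distance in [0, 1]: 0 = up front (mostly dry), 1 = far (mostly wet)."""
    wet = np.convolve(dry, toy_reverb_ir(pre_delay_ms=pre_delay_ms))
    wet = wet / (np.max(np.abs(wet)) + 1e-12)      # crude normalization
    n = len(dry)
    # More distant source: more reverberant sound, less direct sound.
    return (1.0 - distance) * dry + distance * wet[:n]
```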
What do you mean by 'dead center in the mix'?