In the FWIW dept.
I'm not all that surprised that full screen speeds things up and enhances performance. Remember that the kernel (via the OS, obviously) now has to decide which GPU to use. In full screen the desktop and the graphics underneath don't have to be refreshed, so the built-in GPU (or "gpu," since it's only meant for really light drawing and refreshing) is for all intents and purposes idling while the big GPU handles the foreground screen (i.e. full screen). It would be an interesting experiment to take one of the smaller or older Macs and see if there's any enhancement in full screen; I'd bet that full screen will slow down the machines with only a single lightweight onboard "gpu".
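If you want to see that decision with your own eyes, here's a minimal Swift/Metal sketch (assuming a reasonably recent Mac; MTLCopyAllDevices and isLowPower are the real Metal calls, though on an Apple Silicon machine there's only the one GPU so you'll just see a single entry) that lists which GPUs the OS can see and which one it would hand an app by default:

```swift
import Metal

// List every GPU the OS can see. On a dual-GPU Mac you should get two
// entries: the built-in (low-power) gpu and the discrete GPU.
for device in MTLCopyAllDevices() {
    let kind = device.isLowPower ? "built-in / low-power" : "discrete / high-power"
    print("\(device.name): \(kind)")
}

// Ask which GPU the system would hand an app by default right now.
if let current = MTLCreateSystemDefaultDevice() {
    print("system default right now: \(current.name)")
}
```

Run it on a dual-GPU MacBook Pro and you'll see both entries; the low-power one is the built-in gpu I'm talking about.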
If we really wanted to see how the onboard "gpu" and the big GPU work with each other, a movie wouldn't tell us too much. If we ran an app that does some REALLY intensive ray tracing or shading, that's where you'd see the GPU really shine. The i5, i7, M series, anyone's architecture will offload the entire massive job of calculating shading, ray traces, lighting, etc. The GPU was built exactly for those kinds of tasks; the CPU is "almost" relieved of duty and now just has to make calls to the GPU.

Nvidia and ATI are BEASTS at number crunching. Their heritage began a few decades ago when Intel was still using the 80x86 and 80x87 (the ...87 was the "math part"). Smart companies like Sun, Apollo, and Silicon Graphics kinda said, hey wait, that's cool for the math, do you think we could also make a sibling chip for graphics? YES they could, and the microcode inside modern-day GPU chips is daunting.
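To make the "CPU just makes calls to the GPU" point concrete, here's a rough Swift/Metal sketch of offloading a pile of shading-style math. (Everything in it is made up for illustration: the kernel name shade, the toy sin/cos "lighting" math, the sizes. The Metal calls themselves are the real API.)

```swift
import Metal

// A tiny shading-style kernel, compiled at runtime: one GPU thread per
// "pixel", each doing floating-point math the CPU never has to touch.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void shade(device float *out [[buffer(0)]],
                  uint id [[thread_position_in_grid]]) {
    // stand-in for a lighting/shading calculation
    out[id] = sin(float(id) * 0.001) * cos(float(id) * 0.002);
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "shade")!)

let count = 1_000_000
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU's whole job from here on is issuing calls and waiting;
// the actual number crunching runs on the GPU's cores.
let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(buffer, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

let results = buffer.contents().bindMemory(to: Float.self, capacity: count)
print("first few 'shaded' values:", results[0], results[1], results[2])
```

The shape of it is the point: the CPU builds a command buffer, fires it off, and waits, while the million sin/cos evaluations happen in parallel on the GPU.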
Just my $.02