True Multi-Core Optimization May Require Windows Rework
Each successive OS release from Microsoft has improved multicore scaling, but bottlenecks still crop up as multiple applications jockey for CPU time. Priority scheduling helps to a degree, but it isn't transparent to the end user and isn't guaranteed to improve performance. One reason for poor multicore performance is that the operating system doesn't know which programs the user wants prioritized (and which they don't). If the OS is busy running a virus scan while the user is opening documents or copying files, system responsiveness drops like a rock no matter how many cores you have.
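To make that limitation concrete: today's priority scheduling depends on each program volunteering a hint about its own importance. A minimal Win32 sketch of such a hint might look like the following; the priority-class API is real, but the background-scanner scenario is just an assumed example.

```c
/* Sketch: how a background task (say, a virus scanner) could hint to
   the Windows scheduler that it is low priority. Illustrative only;
   whether any given scanner actually does this is up to its author. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Ask the scheduler to treat this process as background work. */
    if (!SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS)) {
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Running at below-normal priority.\n");
    /* ... low-priority scanning work would go here ... */
    return 0;
}
```

The catch is that this only works when every program plays along; the OS itself still has no independent idea of what the user actually cares about.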
We've known since day one of the multicore era that parallel programming is much more difficult than conventional single-core coding. The general assumption back then was that new compilers and methodologies would appear and solve the problem within a few years. Five years later, the situation is generally better, but no one has discovered a magic key that unlocks parallel scalability. Part of the problem is that programmers are still searching for ways to extract greater parallelism from software with its roots in serial execution. Instead of continuing to hammer a square peg into a round hole, Dave Probert, a kernel architect at Microsoft, suggests another sort of solution.
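A hedged illustration of that square peg: much existing software looks like the second loop below, where each iteration depends on the previous one, rather than the first, where iterations are independent and easy to spread across cores. Both functions are hypothetical examples, not drawn from any real codebase.

```c
#include <stdio.h>
#include <stddef.h>

/* Independent iterations: trivially divisible among cores. */
static void scale(double *a, size_t n, double k)
{
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
}

/* Loop-carried dependency: iteration i needs iteration i-1's result,
   so the loop is inherently sequential as written. */
static void running_sum(const double *a, double *out, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        acc += a[i];
        out[i] = acc;
    }
}

int main(void)
{
    double a[4] = { 1.0, 2.0, 3.0, 4.0 };
    double out[4];

    scale(a, 4, 2.0);        /* easy to parallelize */
    running_sum(a, out, 4);  /* resists naive parallelization */

    printf("last running sum: %.1f\n", out[3]); /* prints 20.0 */
    return 0;
}
```

(Running sums can be parallelized with more elaborate scan algorithms, but the point stands: code written with serial assumptions rarely parallelizes for free.)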
Probert argues that the best way to solve the problem is to build a new OS from the ground up on a different set of assumptions. If the OS 'knew' by default that it had multiple cores at its disposal, programs could be assigned to specific processors, and the OS would no longer have to juggle programs across the various cores to ensure each was being handled properly. That juggling takes time, and moving a thread between cores also throws away warm cache state; in some cases it's currently more efficient to keep code executing on a single CPU core than to spin it off to another.
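Today's Windows already exposes a crude form of "assign this program to a specific core" through affinity masks, which gives a feel for the basic mechanism Probert has in mind, though his proposal goes much further. A minimal sketch, with an arbitrary single-core mask chosen purely for illustration:

```c
/* Sketch: pinning the current process to one logical processor so
   the scheduler never migrates it between cores. The API is real
   Win32; the choice of core 0 is just an example. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 1;  /* bit 0 set = run only on logical processor 0 */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Pinned to core 0; no cross-core migration.\n");
    /* Work done here stays on one core, avoiding the switching
       cost described above. */
    return 0;
}
```

The difference is that affinity is yet another per-program hint; in Probert's model, core assignment would be a first-class assumption baked into the OS itself.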
Unfortunately, we aren't going to see an OS built like this anytime soon. Probert's views don't reflect any work going on at Microsoft, and his ideas aren't universally accepted across the Windows architecture team. It's also difficult to model his theory; modern OSes by their very definition don't include the capabilities needed to do so.

Part of what makes Probert's ideas intriguing is that CPU core counts are growing much faster than the consumer software that can take advantage of them. Quad-core desktops are available from Dell and HP for under $500; AMD has enthusiast six-core processors on the way. Intel technically has a six-core chip of its own, but the price tag puts it out of reach for all but the most enthusiastic (and loaded) buyers. It may not happen for another few years, but sooner or later eight-core processors are going to show up in regular desktops. Based on what we've seen over the past few years, it may take a radically different approach to programming to effectively harness that much parallel hardware on a day-to-day basis.