I had an epiphany earlier today: the CPU sharing paradigm has never changed throughout IT history, while use cases have evolved significantly. CPU power was among the first resources to be virtualized, with mainframes running time-sharing OSes. As the cost of computers fell, the number of users per CPU steadily decreased until it reached one user per processor with the Personal Computer. Continuing that same trend, we now have many cores per user!
Yet computer architecture is still designed to manage several tasks at once rather than to execute a couple of tasks faster. This translates into faster cache memory but with added cache synchronization problems, a lack of adequate primitives, and the over-dominance of the thread as the program scheduling abstraction.
Of course, CPU vendors have been trying to address these issues for several years now, by reducing cross-cache latency, offering finer cache line control… But the point is that we are stuck with the same entities and abstractions as we had 30 years ago: stacks, threads and locks.
A paradigm shift is needed to leverage many-core architectures, probably by offering new abstractions: contexts instead of stacks, tasks instead of threads, and sequences instead of locks.