Derived from the previous model (process concurrency): all the sequential programs are hosted as threads within a single process.
- Easier to implement.
- Easier to debug.
- No fault tolerance; there is still overhead due to the use of kernel objects (threads).
- But user-space locks can be used.
- No dynamic control of the parallelism level.
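Since all threads share one address space, any shared state must be guarded by a lock. A minimal Python sketch (the counter and thread count are illustrative; in lower-level languages the lock can be a pure user-space primitive whose uncontended path never enters the kernel):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    """Increment the shared counter n times, taking the lock for each update."""
    global counter
    for _ in range(n):
        with lock:  # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 4 * 10_000 = 40_000
```

Without the lock, the read-modify-write on `counter` could interleave between threads and lose updates.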
One typical implementation is creating one thread per client in a server. A famous variant is Node.js, where the developer writes single-threaded code while the runtime may run work on multiple threads internally when it needs to.
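The thread-per-client pattern can be sketched as follows (a minimal echo server; the function names and port handling are illustrative, not a production design):

```python
import socket
import threading

def handle_client(conn: socket.socket) -> None:
    """One thread per client: echo received data back until the client disconnects."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def serve(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Accept loop: spawn a dedicated thread for each accepted connection.

    Returns the listening socket so the caller can read the bound port.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def accept_loop() -> None:
        while True:
            try:
                conn, _addr = srv.accept()
            except OSError:  # listening socket was closed
                break
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

Each client gets its own sequential handler, which is what makes the model easy to write and debug; the cost is one kernel thread per connection.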
My opinion: Still the dominant model. The first implementations were a transposition of the multi-process model, over which this model brings greatly improved performance. It is also well suited to session-oriented servers/services, where each thread is in charge of one session/client.
It does not scale all that well: you need at least one thread per core to get any benefit, and one often ends up with a system creating far too many threads. Still, it is a pragmatic solution when the metrics fall within the right range, which is around 50-300 simultaneous clients/requests.