Mon, November 3, 2008, 02:24 AM under ParallelComputing
To take advantage of multiple cores in our software, ultimately threads have to be used. Because of this, some developers fall into the trap of equating multithreading with parallelism. That is not accurate.
You can have multithreading on a single-core machine, but you can only have parallelism on a multi-core machine (or a multi-proc one, but I treat them the same). The quick test: if you are using threads on a single-core machine and it makes perfect sense for your scenario, then you are not "doing parallelism", you are just doing multithreading. If that same code runs on a multi-core machine, any overall speedup you observe is accidental; you did not "think parallelism".
The mainstream devs that I know claim they are comfortable with multithreading, and when you drill into what scenarios they are enabling when using threads, two patterns emerge. The first has to do with keeping the UI responsive (and working around the UI thread affinity issue): spin up a thread to carry out some work and then marshal the results back to the UI (and, in advanced scenarios, communicate progress events and also offer cancellation). The second has to do with I/O of one form or another: call a web service on one thread and, while waiting for the results, do some other work on another; or carry out some file operation asynchronously and continue doing work on the initiating thread. When the results of the (disk/network/etc.) I/O operation are available, some synchronization takes place to merge the data from the executor back to the requestor. The .NET Framework has very good support for all of the above, for example with things like the BackgroundWorker, the APM pattern (BeginXYZ/EndXYZ), and the event-based asynchronous pattern. Ultimately, it all comes down to using the ThreadPool, directly or indirectly. Sketches of both patterns follow.
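To make the first pattern concrete, here is a minimal sketch of a BackgroundWorker keeping a WinForms UI responsive while reporting progress and supporting cancellation. This is just an illustration, not code from a real app; the form, control names and the fake work loop are all made up.

    using System;
    using System.ComponentModel;
    using System.Threading;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        private readonly BackgroundWorker worker = new BackgroundWorker();
        private readonly Button startButton = new Button();
        private readonly Label statusLabel = new Label();

        public MainForm()
        {
            startButton.Text = "Start";
            statusLabel.Top = 40;
            Controls.Add(startButton);
            Controls.Add(statusLabel);

            worker.WorkerReportsProgress = true;
            worker.WorkerSupportsCancellation = true;
            worker.DoWork += OnDoWork;                 // runs on a ThreadPool thread
            worker.ProgressChanged += OnProgress;      // marshalled back to the UI thread
            worker.RunWorkerCompleted += OnCompleted;  // marshalled back to the UI thread
            startButton.Click += delegate { worker.RunWorkerAsync(); };
        }

        private void OnDoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 1; i <= 100; i++)
            {
                if (worker.CancellationPending) { e.Cancel = true; return; }
                Thread.Sleep(50);                      // stand-in for a unit of real work
                worker.ReportProgress(i);
            }
            e.Result = "done";
        }

        private void OnProgress(object sender, ProgressChangedEventArgs e)
        {
            statusLabel.Text = e.ProgressPercentage + "%";  // safe: we are on the UI thread
        }

        private void OnCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            statusLabel.Text = e.Cancelled ? "cancelled" : (string)e.Result;
        }

        [STAThread]
        static void Main() { Application.Run(new MainForm()); }
    }

And a similarly minimal sketch of the second pattern, hiding I/O latency with the APM; the file name and the DoOtherWork placeholder are again just for illustration.

    using System;
    using System.IO;
    using System.Text;

    class ApmSketch
    {
        static void Main()
        {
            // FileOptions.Asynchronous requests true asynchronous (overlapped) I/O.
            FileStream fs = new FileStream("data.txt", FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 4096, FileOptions.Asynchronous);
            byte[] buffer = new byte[4096];

            // Begin the read; it proceeds without blocking the initiating thread.
            IAsyncResult ar = fs.BeginRead(buffer, 0, buffer.Length, null, null);

            DoOtherWork();  // the initiating thread is free to do other work meanwhile

            // Synchronize with the I/O: EndRead blocks only if it hasn't finished yet.
            int bytesRead = fs.EndRead(ar);
            fs.Close();
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, bytesRead));
        }

        static void DoOtherWork() { /* some useful work here */ }
    }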
The previous paragraph summarized the opportunities (and touched on the challenges) of leveraging concurrency on a single-core machine, which is what most of us have today. The end-user goal is to improve the perceived performance, or perhaps to improve the overall performance by hiding latency.
All of the above is applicable on multi-core machines too, but it is not parallelism. On a multi-core machine there is an additional opportunity: to improve the actual performance of your compute-bound operations by bringing parallel programming into the picture (more on this in another post).
Another way of putting it is that on a single core you can use threads and you can have concurrency, but to achieve parallelism on a multi-core box you have to identify the exploitable concurrency in your code: the portions of your code that can truly run at the same time. A sketch of what that can look like follows.
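Here is a minimal sketch of identifying and exploiting that concurrency by hand (my example, not prescriptive): a compute-bound sum of squares whose index range is partitioned into independent chunks, one per core. On a single-core machine this still runs correctly, but it cannot go faster; on a multi-core machine the chunks truly run at the same time.

    using System;
    using System.Threading;

    class ParallelSumSketch
    {
        static void Main()
        {
            int[] data = new int[10000000];
            for (int i = 0; i < data.Length; i++) data[i] = i % 1000;

            int threadCount = Environment.ProcessorCount;  // one chunk per core
            long[] partialSums = new long[threadCount];
            Thread[] threads = new Thread[threadCount];
            int chunk = data.Length / threadCount;

            for (int t = 0; t < threadCount; t++)
            {
                int id = t;  // capture a fresh copy for the anonymous method
                int start = id * chunk;
                int end = (id == threadCount - 1) ? data.Length : start + chunk;
                threads[id] = new Thread(delegate()
                {
                    long sum = 0;
                    for (int i = start; i < end; i++) sum += (long)data[i] * data[i];
                    partialSums[id] = sum;  // each thread writes only its own slot
                });
                threads[id].Start();
            }

            foreach (Thread th in threads) th.Join();

            long total = 0;
            foreach (long s in partialSums) total += s;
            Console.WriteLine(total);
        }
    }

Note that each chunk shares no mutable state with the others (each thread writes only its own slot of partialSums, so no locking is needed); that independence is exactly what makes the concurrency exploitable.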