What is the difference between a ‘Thread’ and a ‘Core’?
There is a lot of confusion around these two terms, because hardware and software use ‘thread’ in related but different ways.
In hardware, a ‘thread’ usually means a logical core. It may or may not be a physical core (e.g. Hyper-Threaded). The image above shows a good graphical representation of this.
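On the software side, most APIs count logical cores (hardware threads), not physical ones. A minimal sketch in Python, using only the standard library:

```python
import os

# os.cpu_count() reports logical cores: with Hyper-Threading/SMT enabled,
# this is typically double the number of physical cores.
logical = os.cpu_count()
print(f"Logical cores visible to the OS: {logical}")
```

Getting the *physical* core count portably is harder; it usually means parsing platform-specific data (e.g. /proc/cpuinfo on Linux) or using a third-party library.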
In software, a ‘thread’ depends on the OS, but is broadly a single sequential stream of code that is executing, or waiting to execute. A thread can only run on one core at a time, though the scheduler can move it between cores. If it is moved too often, the warm CPU cache on the old core is wasted, hurting performance (sometimes called cache thrashing). So Windows and other OSes try to keep a thread on the same core, which explains why you may see core 1 at 100% utilization while core 3 sits at 0%. Of course, from time to time, to balance thermal load, they will move a CPU-bound (CPU-consuming) thread to another core.
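You can inspect, and even override, which cores the scheduler may use via CPU affinity. A minimal sketch using Python's Linux-only `sched_getaffinity`/`sched_setaffinity` (on Windows the rough equivalent is the `SetThreadAffinityMask` API):

```python
import os

# Which cores may this process run on right now? (Linux-only API.)
allowed = os.sched_getaffinity(0)  # 0 = the calling process
print(f"Allowed cores: {sorted(allowed)}")

# Pin the process to a single core, mimicking what the scheduler tries
# to do anyway to keep the cache warm, then restore the original mask.
pinned = {min(allowed)}
os.sched_setaffinity(0, pinned)
assert os.sched_getaffinity(0) == pinned
os.sched_setaffinity(0, allowed)
```

Pinning is occasionally useful for latency-sensitive work, but for normal programs the scheduler's own core-stickiness is good enough.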
In Windows, a process contains one or more threads. On Linux, historically, a process often didn’t use threads at all: it forked itself, and the copy acted much like a thread (modern Linux also has real threads, created via clone()/pthreads). I don’t want to get too deep into forking, as then I have to mention virtual memory sharing, copy-on-write, and all sorts of advanced OS concepts. BUT, basically, whether on Windows or Linux, a duplicated instance of a process or module can share its virtual memory with other instances. That saves RAM, as you only need one copy in RAM, even if it is referenced by more than one process. When a virtual memory page is written to, it is then made exclusive to *that* process, hence the name copy-on-write.
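Copy-on-write is easy to observe from the parent’s side. A small sketch assuming a POSIX system (os.fork is unavailable on Windows); note that CPython’s reference counting also writes to pages, so this only illustrates that the child’s writes stay private, not the RAM savings themselves:

```python
import os

data = [0] * 4  # shared with the child via copy-on-write after fork()

pid = os.fork()
if pid == 0:
    # Child: this write triggers a private copy of the touched page(s);
    # the parent's view of `data` is unaffected.
    data[0] = 99
    os._exit(0)

os.waitpid(pid, 0)
print(data[0])  # still 0 in the parent
```

Both processes start out referencing the same physical pages; only the page the child writes to gets duplicated.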
Aside from the last tangent, I hope this helps explain cores vs threads. If you have questions, ask below!