Hi! Not the guy from the comment above, but I’d like to chip in :)
I don’t know much about the cache question; I think I’ve heard something about this before, and the answer was basically that yes, more distance just makes it slower.
About the multithreading:
If the cost of creating threads is becoming an issue, look into the concept of thread pools. They are a neat way of reusing resources and ensuring you don’t try to have more parallelism than is actually possible.
Edit: if your work is CPU-bound, meaning the cores are actually computing all the time and not waiting on IO or networking, the rule of thumb is not to let the number of threads exceed the number of cores (see the rough sketch below).
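Here’s a rough sketch in Java of what that looks like with a fixed-size pool sized to the core count. The heavyComputation method is just a made-up stand-in for whatever CPU-bound work you actually have:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    // Placeholder for some CPU-bound work; substitute your own task here.
    static long heavyComputation(int seed) {
        long acc = seed;
        for (int i = 0; i < 10_000_000; i++) {
            acc = acc * 6364136223846793005L + 1442695040888963407L;
        }
        return acc;
    }

    public static void main(String[] args) throws Exception {
        // Size the pool to the number of cores, per the rule of thumb above.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Submit many tasks; the pool creates its threads once and reuses them,
        // so you avoid paying the thread-creation cost per task.
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            final int seed = i;
            results.add(pool.submit(() -> heavyComputation(seed)));
        }

        // Collect results; get() blocks until each task has finished.
        for (Future<Long> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

The idea is that the 100 tasks queue up and only ever run on as many threads as there are cores, instead of spawning 100 threads that fight over the CPUs.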
As for use cases for servers with this many cores: shared computing, for example VM hosts. The number of VMs you can sensibly host on a server is limited by the number of cores you have. Depending on the kind of hypervisor you are using, you can share cores between VMs, but that’s going to make the VMs slower.
Another example of shared computing is HPC clusters, where many people schedule some kind of work; the cluster allocates the resources, executes the task, and returns the results to you. Having more cores allows more of these tasks to run in parallel, effectively increasing the throughput of the cluster.