What exactly makes Java Virtual Threads better?
I am pretty hyped for Project Loom, but there is one thing that I can't fully understand.
Most Java servers use thread pools with a fixed limit of threads (200, 300, etc.), yet the OS does not stop you from spawning many more; I've read that with special configuration on Linux you can reach huge numbers.
OS threads are more expensive: they are slower to start and stop, they have to deal with context switching (a cost magnified by their number), and you depend on the OS, which might refuse to give you more threads.
Having said that, virtual threads also consume similar amounts of memory (or at least that is how I understood it). With Loom we get tail-call optimization, which should reduce memory usage. Also, synchronization and copying of thread context should still be a problem of a similar size.
Indeed, you are able to spawn millions of Virtual Threads:
public static void main(String[] args) {
    for (int i = 0; i < 1_000_000; i++) {
        // Each virtual thread is a small object scheduled by the JVM, not an OS thread,
        // so this loop completes without exhausting memory.
        Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
    }
}
The code above breaks at around 25k threads with an OutOfMemoryError when I use Platform threads.
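For comparison, here is a minimal sketch of the platform-thread version of the same loop (the exact point of failure depends on OS limits and the default -Xss stack size, so the numbers are only indicative):

public static void main(String[] args) {
    for (int i = 0; i < 1_000_000; i++) {
        // Each platform thread maps 1:1 to an OS thread with its own native stack,
        // so this loop typically dies with "unable to create native thread"
        // after a few tens of thousands of iterations.
        new Thread(() -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }).start();
    }
}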
My question is: what exactly makes these threads so light? What is preventing us from spawning a million platform threads and working with them? Is it only the context switching that makes regular threads so "heavy"?
One very similar question
Things I found so far:
- Context switching is expensive. Generally speaking, even in the ideal case where the OS knows how the threads will behave, it still has to give each thread an equal chance to execute (given they have the same priority). If we spawn 10k OS threads, it has to switch between them constantly, and in some cases this task alone can occupy up to 80% of the CPU time, so we have to be very careful with the numbers. With Virtual Threads, context switching is done by the JVM (mounting and unmounting a continuation on a carrier thread), which makes it far cheaper; see the first sketch after this list
- Cheap start/stop. When we interrupt a thread, we essentially tell the task, "kill the OS thread you are running on". However, if that thread is in a thread pool, by the time we ask, it might already have been released by the current task and handed to another one, so a different task could receive the interruption signal. This makes interruption quite complex. Virtual Threads are simply objects that live on the heap; we can just let the GC collect them in the background (see the interruption sketch after this list)
- Hard upper limits on the number of threads (tens of thousands at most), due to the way the OS handles them. The OS cannot be fine-tuned to a specific application or programming language, so it has to prepare for the worst-case scenario memory-wise: it allocates more memory than will actually be used in order to accommodate all needs, and while doing so it still has to ensure that vital OS processes keep working. With Virtual Threads you are limited only by memory, which is cheap
- A thread that performs a transaction behaves very differently from a thread that does video processing; again, the OS has to prepare for the worst-case scenario and accommodate both cases as best it can, which means we get suboptimal performance in most cases. Since Virtual Threads are spawned and managed by Java itself, the runtime has full control over them and can apply task-specific optimizations that are not bound to the OS
- Resizable stack. The OS gives each thread a big stack to fit all use cases, whereas a Virtual Thread has a resizable stack that lives in the heap and is dynamically grown and shrunk to fit the task, which keeps it small
- Smaller metadata size. Platform threads reserve about 1 MB for their stack by default, whereas Virtual Threads need only 200-300 bytes to store their metadata
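To make the scheduling point concrete, here is a minimal sketch (assuming JDK 21+; the class name is arbitrary) that submits many tasks to a virtual-thread-per-task executor. The printed thread names show the Virtual Threads being mounted on a handful of carrier threads from the JVM's internal ForkJoinPool:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CarrierDemo {
    public static void main(String[] args) {
        // One virtual thread per task; the JVM multiplexes them onto a small
        // pool of carrier threads (roughly one per CPU core).
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    // Prints something like:
                    // VirtualThread[#123]/runnable@ForkJoinPool-1-worker-3
                    System.out.println(Thread.currentThread());
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for the submitted tasks to finish
    }
}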
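And a minimal interruption sketch under the same assumption (JDK 21+, arbitrary class name): interrupting a Virtual Thread is handled entirely inside the JVM by setting a flag on a heap object and unparking it, rather than signalling an OS thread:

public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        // The virtual thread is an ordinary java.lang.Thread object on the heap.
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(10_000);
            } catch (InterruptedException e) {
                System.out.println("interrupted, cleaning up and exiting");
            }
        });
        vt.interrupt(); // no OS thread is signalled; the virtual thread unparks and sees the flag
        vt.join();      // once it finishes, the Thread object is simply garbage collected
    }
}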
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow