If async-await doesn't create any additional threads, then how does it make applications responsive?

Time and time again, I see it said that using async-await doesn't create any additional threads. That doesn't make sense, because the only ways that a computer can appear to be doing more than 1 thing at a time are:

  • Actually doing more than 1 thing at a time (executing in parallel, making use of multiple processors)
  • Simulating it by scheduling tasks and switching between them (do a little bit of A, a little bit of B, a little bit of A, etc.)

So if async-await does neither of those, then how can it make an application responsive? If there is only 1 thread, then calling any method means waiting for the method to complete before doing anything else, and the methods inside that method have to wait for the result before proceeding, and so forth.



Solution 1:[1]

Actually, async/await is not that magical. The full topic is quite broad but for a quick yet complete enough answer to your question I think we can manage.

Let's tackle a simple button click event in a Windows Forms application:

public async void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("before awaiting");
    await GetSomethingAsync();
    Console.WriteLine("after awaiting");
}

I'm going to explicitly not talk about whatever it is GetSomethingAsync is returning for now. Let's just say this is something that will complete after, say, 2 seconds.

In a traditional, non-asynchronous, world, your button click event handler would look something like this:

public void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("before waiting");
    DoSomethingThatTakes2Seconds();
    Console.WriteLine("after waiting");
}

When you click the button in the form, the application will appear to freeze for around 2 seconds, while we wait for this method to complete. What happens is that the "message pump", basically a loop, is blocked.

This loop continuously asks windows "Has anyone done something, like moved the mouse, clicked on something? Do I need to repaint something? If so, tell me!" and then processes that "something". This loop got a message that the user clicked on "button1" (or the equivalent type of message from Windows), and ended up calling our button1_Click method above. Until this method returns, this loop is now stuck waiting. This takes 2 seconds and during this, no messages are being processed.

Most things that deal with windows are done using messages, which means that if the message loop stops pumping messages, even for just a second, it is quickly noticeable by the user. For instance, if you move notepad or any other program on top of your own program, and then away again, a flurry of paint messages are sent to your program indicating which region of the window now suddenly became visible again. If the message loop that processes these messages is waiting for something, blocked, then no painting is done.

So, if in the first example, async/await doesn't create new threads, how does it do it?

Well, what happens is that your method is split in two. This is one of those broad-topic type of things, so I won't go into too much detail, but suffice it to say the method is split into these two parts:

  1. All the code leading up to await, including the call to GetSomethingAsync
  2. All the code following await

Illustration:

code... code... code... await X(); ... code... code... code...

Rearranged:

code... code... code... var x = X(); await x; code... code... code...
^                                  ^          ^                     ^
+---- portion 1 -------------------+          +---- portion 2 ------+

Basically the method executes like this:

  1. It executes everything up to await

  2. It calls the GetSomethingAsync method, which does its thing, and returns something that will complete 2 seconds in the future

    So far we're still inside the original call to button1_Click, happening on the main thread, called from the message loop. If the code leading up to await takes a lot of time, the UI will still freeze. In our example, not so much

  3. What the await keyword, together with some clever compiler magic, does is basically say something like "Ok, you know what, I'm going to simply return from the button click event handler here. When you (as in, the thing we're waiting for) get around to completing, let me know because I still have some code left to execute".

    Actually it will let the SynchronizationContext class know that it is done, and that context, depending on which synchronization context is in play right now, will queue the rest of the method for execution. The context class used in a Windows Forms program queues it using the queue that the message loop is pumping. (A rough sketch of this rewriting follows after this list.)

  4. So it returns back to the message loop, which is now free to continue pumping messages, like moving the window, resizing it, or clicking other buttons.

    For the user, the UI is now responsive again, processing other button clicks, resizing and most importantly, redrawing, so it doesn't appear to freeze.

  5. 2 seconds later, the thing we're waiting for completes and what happens now is that it (well, the synchronization context) places a message into the queue that the message loop is looking at, saying "Hey, I got some more code for you to execute", and this code is all the code after the await.

  6. When the message loop gets to that message, it will basically "re-enter" that method where it left off, just after await, and continue executing the rest of the method. Note that this code is again called from the message loop, so if this code happens to do something lengthy without using async/await properly, it will again block the message loop.
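
To make step 3 a bit more concrete, here is a very rough sketch of the shape the rewritten handler takes. This is not the actual generated code (the compiler builds a full state machine type), and it assumes, as above, that GetSomethingAsync returns a Task:

public void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("before awaiting");            // portion 1

    var awaiter = GetSomethingAsync().GetAwaiter();
    awaiter.OnCompleted(() =>
    {
        Console.WriteLine("after awaiting");         // portion 2, queued back through the
    });                                              // captured SynchronizationContext

    // control returns to the message loop right here, long before the 2 seconds are up
}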

There are many moving parts under the hood here, so it is worth reading up on them. I was going to say "should you need it", but this topic is quite broad and it is fairly important to know some of those moving parts. Invariably you're going to find that async/await is still a leaky abstraction. Some of the underlying limitations and problems still leak up into the surrounding code, and if they don't, you usually end up having to debug an application that breaks randomly for seemingly no good reason.


OK, so what if GetSomethingAsync spins up a thread that will complete in 2 seconds? Yes, then obviously there is a new thread in play. This thread, however, is not there because of the async-ness of the method; it is there because the programmer of that method chose a thread to implement its asynchronous code. Almost all asynchronous I/O doesn't use a thread; it uses other mechanisms. async/await by themselves do not spin up new threads, but obviously the "things we wait for" may be implemented using threads.

There are many things in .NET that do not necessarily spin up a thread on their own but are still asynchronous:

  • Web requests (and many other network-related things that take time)
  • Asynchronous file reading and writing (a small sketch of this follows below)
  • and many more; a good sign is if the class/interface in question has methods named SomethingSomethingAsync, or BeginSomething and EndSomething with an IAsyncResult involved.

Usually these things do not use a thread under the hood.
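
As a small, hedged illustration of the asynchronous file case from the list above (the file name is just a placeholder), opening the FileStream for overlapped I/O means no thread sits blocked while the disk does its work; the continuation runs when the operating system reports completion:

using System.IO;
using System.Threading.Tasks;

static async Task<int> ReadFirstChunkAsync()
{
    using var fs = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
                                  FileShare.Read, 4096, useAsync: true);
    var buffer = new byte[4096];

    // While the disk works, no thread waits here; the method has already
    // returned its Task to the caller.
    return await fs.ReadAsync(buffer, 0, buffer.Length);
}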


OK, so you want some of that "broad topic stuff"?

Well, let's ask Try Roslyn about our button click.


I'm not going to link in the full generated class here but it's pretty gory stuff.

Solution 2:[2]

I explain it in full in my blog post There Is No Thread.

In summary, modern I/O systems make heavy use of DMA (Direct Memory Access). There are special, dedicated processors on network cards, video cards, HDD controllers, serial/parallel ports, etc. These processors have direct access to the memory bus, and handle reading/writing completely independently of the CPU. The CPU just needs to notify the device of the location in memory containing the data, and then can do its own thing until the device raises an interrupt notifying the CPU that the read/write is complete.

Once the operation is in flight, there is no work for the CPU to do, and thus no thread.

Solution 3:[3]

the only ways that a computer can appear to be doing more than 1 thing at a time is (1) Actually doing more than 1 thing at a time, (2) simulating it by scheduling tasks and switching between them. So if async-await does neither of those

It's not that await does neither of those. Remember, the purpose of await is not to make synchronous code magically asynchronous. It's to enable using the same techniques we use for writing synchronous code when calling into asynchronous code. Await is about making the code that uses high latency operations look like code that uses low latency operations. Those high latency operations might be on threads, they might be on special purpose hardware, they might be tearing their work up into little pieces and putting it in the message queue for processing by the UI thread later. They're doing something to achieve asynchrony, but they are the ones that are doing it. Await just lets you take advantage of that asynchrony.

Also, I think you are missing a third option. We old people -- kids today with their rap music should get off my lawn, etc -- remember the world of Windows in the early 1990s. There were no multi-CPU machines and no thread schedulers. You wanted to run two Windows apps at the same time, you had to yield. Multitasking was cooperative. The OS tells a process that it gets to run, and if it is ill-behaved, it starves all the other processes from being served. It runs until it yields, and somehow it has to know how to pick up where it left off the next time the OS hands control back to it. Single-threaded asynchronous code is a lot like that, with "await" instead of "yield". Awaiting means "I'm going to remember where I left off here, and let someone else run for a while; call me back when the task I'm waiting on is complete, and I'll pick up where I left off." I think you can see how that makes apps more responsive, just as it did in the Windows 3 days.

calling any method means waiting for the method to complete

There is the key that you are missing. A method can return before its work is complete. That is the essence of asynchrony right there. A method returns, and what it returns is a task that means "this work is in progress; tell me what to do when it is complete". The work of the method is not done, even though it has returned.

Before the await operator, you had to write code that looked like spaghetti threaded through swiss cheese to deal with the fact that we have work to do after completion, but with the return and the completion desynchronized. Await allows you to write code that looks like the return and the completion are synchronized, without them actually being synchronized.
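
To make that concrete, here is a hedged sketch of a GetSomethingAsync along those lines (the names and the 2-second delay are purely illustrative). The method hands back its task immediately, and a timer callback, not a blocked thread, completes it later:

using System.Threading;
using System.Threading.Tasks;

static Task<string> GetSomethingAsync()
{
    var tcs = new TaskCompletionSource<string>();

    Timer? timer = null;
    timer = new Timer(_ =>
    {
        tcs.TrySetResult("done");   // completes the task that was handed out 2 seconds ago
        timer?.Dispose();
    }, null, dueTime: 2000, period: Timeout.Infinite);

    return tcs.Task;                // returns right away; the work is not done yet
}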

Solution 4:[4]

I am really glad someone asked this question, because for the longest time I also believed threads were necessary for concurrency. When I first saw event loops, I thought they were a lie. I thought to myself "there's no way this code can be concurrent if it runs in a single thread". Keep in mind this is after I had already gone through the struggle of understanding the difference between concurrency and parallelism.

After research of my own, I finally found the missing piece: select(). Specifically, IO multiplexing, implemented by various kernels under different names: select(), poll(), epoll(), kqueue(). These are system calls that, while the implementation details differ, allow you to pass in a set of file descriptors to watch. Then you can make another call that blocks until one of the watched file descriptors changes.

Thus, one can wait on a set of IO events (the main event loop), handle the first event that completes, and then yield control back to the event loop. Rinse and repeat.

How does this work? Well, the short answer is that it's kernel and hardware-level magic. There are many components in a computer besides the CPU, and these components can work in parallel. The kernel can control these devices and communicate directly with them to receive certain signals.

These IO multiplexing system calls are the fundamental building block of single-threaded event loops like node.js or Tornado. When you await a function, you are watching for a certain event (that function's completion), and then yielding control back to the main event loop. When the event you are watching completes, the function (eventually) picks up from where it left off. Functions that allow you to suspend and resume computation like this are called coroutines.
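
For the curious, here is a minimal, hedged sketch of such a single-threaded event loop in C#, built on Socket.Select (a thin wrapper over the OS-level select() call). The port number and the echo behaviour are assumptions purely for illustration:

using System.Collections;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Loopback, 5000));
listener.Listen(16);

var clients = new List<Socket>();
var buffer = new byte[4096];

while (true)
{
    // Watch the listener plus every connected client; block until at least one is readable.
    var readable = new ArrayList(clients) { listener };
    Socket.Select(readable, null, null, -1);              // -1 = wait indefinitely

    foreach (Socket s in readable)
    {
        if (s == listener)
        {
            clients.Add(listener.Accept());               // a new connection is ready
        }
        else
        {
            int n = s.Receive(buffer);                    // data is ready, so this won't block
            if (n == 0) { clients.Remove(s); s.Close(); } // peer closed the connection
            else s.Send(buffer, n, SocketFlags.None);     // echo it back
        }
    }
}

One thread serves many in-flight connections; awaiting in C# ultimately leans on the same kind of OS facility rather than on threads.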

Solution 5:[5]

await and async use Tasks not Threads.

The framework has a pool of threads ready to execute some work in the form of Task objects; submitting a Task to the pool means selecting a free, already existing[1], thread to call the task's action method.
Creating a Task is a matter of creating a new object, far faster than creating a new thread.

Given a Task, it is possible to attach a continuation to it: a new Task object to be executed once the first task ends.

Since async/await uses Tasks, it doesn't create a new thread.


While interrupt programming techniques are used widely in every modern OS, I don't think they are relevant here.
You can have two CPU-bound tasks executing in parallel (interleaved, actually) on a single CPU using async/await.
That could not be explained simply by the fact that the OS supports queuing I/O request packets (IRPs).


Last time I checked, the compiler transforms async methods into a state machine (a DFA); the work is divided into steps, each one terminating with an await instruction.
The await starts its Task and attaches a continuation to it that executes the next step.

As a conceptual illustration, here is a pseudo-code example.
Things are simplified for the sake of clarity and because I don't remember all the details exactly.

method:
   instr1                  
   instr2
   await task1
   instr3
   instr4
   await task2
   instr5
   return value

It gets transformed into something like this:

int state = 0;

Task nextStep()
{
  switch (state)
  {
     case 0:
        instr1;
        instr2;
        state = 1;

        task1.addContinuation(nextStep);
        task1.start();

        return task1;

     case 1:
        instr3;
        instr4;
        state = 2;

        task2.addContinuation(nextStep);
        task2.start();

        return task2;

     case 2:
        instr5;
        state = 0;

        task3 = new Task();
        task3.setResult(value);
        task3.setCompleted();

        return task3;
   }
}

method:
   nextStep();

[1] Actually a pool can have its own task-creation policy.

Solution 6:[6]

Here is how I view all this, it may not be super technically accurate but it helps me, at least :).

There are basically two types of processing (computation) that happen on a machine:

  • processing that happens on the CPU
  • processing that happens on other processors (GPU, network card, etc.); let's call them IO.

So, when we write a piece of source code, after compilation, depending on the object we use (and this is very important), processing will be CPU bound, or IO bound, and in fact, it can be bound to a combination of both.

Some examples:

  • if I use the Write method of the FileStream object (which is a Stream), processing will be, say, 1% CPU bound and 99% IO bound.
  • if I use the Write method of the NetworkStream object (which is a Stream), processing will be, say, 1% CPU bound and 99% IO bound.
  • if I use the Write method of the MemoryStream object (which is a Stream), processing will be 100% CPU bound.

So, as you see, from an object-oriented programmer point-of-view, although I'm always accessing a Stream object, what happens beneath may depend heavily on the ultimate type of the object.

Now, to optimize things, it's sometimes useful to be able to run code in parallel (note I don't use the word asynchronous) if it's possible and/or necessary.

Some examples:

  • In a desktop app, I want to print a document, but I don't want to wait for it.
  • My web server serves many clients at the same time, each one getting its pages in parallel (not serialized).

Before async / await, we essentially had two solutions to this:

  • Threads. They were relatively easy to use, with the Thread and ThreadPool classes. Threads are CPU bound only.
  • The "old" Begin/End/AsyncCallback asynchronous programming model. It's just a model; it doesn't tell you whether you'll be CPU or IO bound. If you take a look at the Socket or FileStream classes, it's IO bound, which is cool, but we rarely use it.

Async / await is only a common programming model, based on the Task concept. It's a bit easier to use than threads or thread pools for CPU-bound tasks, and much easier to use than the old Begin/End model. Under the covers, however, it's "just" a sophisticated, feature-full wrapper over both.

So, the real win is mostly on IO-bound tasks, tasks that don't use the CPU; but async/await is still only a programming model, and it doesn't help you determine how or where processing will happen in the end.

It means that just because a class has a method "DoSomethingAsync" returning a Task object, you can't presume whether it will be CPU bound (which means it may be quite useless, especially if it doesn't have a cancellation token parameter), or IO bound (which means it's probably a must), or a combination of both (since the model is quite viral, the boundness and potential benefits can be, in the end, quite mixed and not so obvious).

So, coming back to my examples, doing my Write operations using async/await on MemoryStream will stay CPU bound (I will probably not benefit from it), although I will surely benefit from it with files and network streams.
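
As a hedged sketch of those examples (the output file name is a placeholder), the awaitable API looks identical on both streams, but only the file version involves real I/O that the operating system completes for us:

using System.IO;
using System.Text;

byte[] data = Encoding.UTF8.GetBytes("hello");

// Same awaitable call shape on both streams...
using (var ms = new MemoryStream())
{
    await ms.WriteAsync(data, 0, data.Length);       // pure CPU/memory work; completes synchronously
}

using (var fs = new FileStream("out.bin", FileMode.Create, FileAccess.Write,
                               FileShare.None, 4096, useAsync: true))
{
    await fs.WriteAsync(data, 0, data.Length);       // genuine asynchronous I/O; no thread waits on the disk
}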

Solution 7:[7]

I'm not going to compete with Eric Lippert or Lasse V. Karlsen, and others, I just would like to draw attention to another facet of this question, that I think was not explicitly mentioned.

Using await on its own does not make your app magically responsive. If whatever you do in the method you are awaiting from the UI thread blocks, it will still block your UI the same way the non-awaitable version would.

You have to write your awaitable method specifically so it either spawns a new thread or uses something like a completion port (which returns execution to the current thread and calls something else for continuation whenever the completion port gets signaled). But this part is well explained in other answers.
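
A hedged sketch of that pitfall (the method names are made up): awaiting the first method still freezes the UI, because the blocking happens synchronously on the UI thread before anything is actually awaited, while the second one genuinely yields back to the message loop:

using System.Threading;
using System.Threading.Tasks;

// Awaiting this still blocks the UI: the sleep runs on the calling (UI) thread.
async Task PretendAsync()
{
    Thread.Sleep(2000);            // synchronous block before any real await
    await Task.CompletedTask;
}

// Awaiting this keeps the UI responsive: the delay is a genuinely asynchronous wait.
async Task ActuallyAsync()
{
    await Task.Delay(2000);        // no thread is blocked while the timer runs
}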

Solution 8:[8]

I'll try to explain it bottom up; maybe someone will find it helpful. I was there, done that, reinvented it, back when I made simple games in DOS in Pascal (good old times...).

So... Every event driven application has an event loop inside that's something like this:

while (getMessage(out message)) // pseudo-code
{
   dispatchMessage(message); // pseudo-code
}

Frameworks usually hide this detail from you, but it's there. The getMessage function reads the next event from the event queue or waits until an event happens: mouse move, keydown, keyup, click, etc. Then dispatchMessage dispatches the event to the appropriate event handler. Then it waits for the next event, and so on, until a quit event arrives that exits the loop and finishes the application.

Event handlers should run fast so the event loop can poll for more events and the UI remains responsive. What happens if a button click triggers an expensive operation like this?

void expensiveOperation()
{
    for (int i = 0; i < 1000; i++)
    {
        Thread.Sleep(10);
    }
}

Well, the UI becomes unresponsive until the 10-second operation finishes, as control stays within the function. To solve this problem you need to break up the task into small parts that can execute quickly. This means you cannot handle the whole thing in a single event. You must do a small part of the work, then post another event to the event queue to ask for continuation.

So you would change this to:

void expensiveOperation()
{
    doIteration(0);
}

void doIteration(int i)
{
    if (i >= 1000) return;
    Thread.Sleep(10); // Do a piece of work.
    postFunctionCallMessage(() => {doIteration(i + 1);}); // Pseudo code. 
}

In this case only the first iteration runs, then it posts a message to the event queue to run the next iteration and returns. In our example the postFunctionCallMessage pseudo-function puts a "call this function" event into the queue, so the event dispatcher will call it when it reaches it. This allows all other GUI events to be processed while pieces of the long-running work keep executing as well.

As long as this long-running task is running, its continuation event is always in the event queue, so you have basically invented your own task scheduler, where the continuation events in the queue are the "processes" that are running. This is actually what operating systems do, except that the sending of the continuation events and the return to the scheduler loop is done via the CPU's timer interrupt, where the OS has registered the context-switching code, so you don't need to care about it. But here you are writing your own scheduler, so you do need to care about it - so far.

So we can run long-running tasks in a single thread, in parallel with the GUI, by breaking them up into small chunks and sending continuation events. This is the general idea of the Task class. It represents a piece of work, and when you call .ContinueWith on it, you define what function to call as the next piece when the current piece finishes (and its return value is passed to the continuation). But doing all this chaining and splitting the work into small pieces manually is cumbersome and totally messes up the layout of the logic, because the entire background task code is basically a .ContinueWith mess. So this is where the compiler helps you. It does all this chaining and continuation for you under the hood. When you say await, you tell the compiler "stop here, and add the rest of the function as a continuation task". The compiler takes care of the rest, so you don't have to.
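
As a small, hedged sketch of that last point (the URL is just an example, and the two forms are not exactly equivalent in scheduling or error handling), here is the same intent written first with manual .ContinueWith chaining and then with await, which is roughly what the compiler writes for you:

using System.Net.Http;
using System.Threading.Tasks;

// Manual chaining: the "rest of the work" is handed over as a callback.
Task<string> DownloadThenShoutManual(HttpClient client)
{
    return client.GetStringAsync("https://example.com")
                 .ContinueWith(t => t.Result.ToUpperInvariant());
}

// The same intent with await: the compiler builds the continuation for us.
async Task<string> DownloadThenShoutAwait(HttpClient client)
{
    string body = await client.GetStringAsync("https://example.com");
    return body.ToUpperInvariant();
}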

While this chaining of task pieces doesn't itself involve creating threads, and when the pieces are small they can be scheduled on the main thread's event loop, in practice there is a worker thread pool that runs the Tasks. This allows better utilization of CPU cores, and it also lets the developer run a manually written long Task (which would then block a worker thread instead of the main thread).

Solution 9:[9]

Summarizing other answers:

Async/await is mainly intended for IO-bound tasks because, by using it, the calling thread doesn't need to be blocked. This is especially useful for UI threads, since we can ensure that they remain responsive while a background operation is being performed (like fetching data to be displayed from a remote server).

Async doesn't create its own thread. The thread of the calling method is used to execute the async method until it finds an awaitable. The same thread then continues to execute the rest of the calling method beyond the async method call. Note that within the called async method, after returning from the awaitable, the remainder of the method could be executed using a thread from the thread pool - the only place a separate thread comes into the picture.

Solution 10:[10]

This is not directly answering the question, but I think it is interesting additional information:

Async and await do not create new threads by themselves. BUT depending on where you use async/await, the synchronous part BEFORE the await may run on a different thread than the synchronous part AFTER the await (for example, ASP.NET and ASP.NET Core behave differently).

In UI-thread based applications (WinForms, WPF) you will be on the same thread before and after. But when you use async/await on a thread pool thread, the thread before and after the await may not be the same.
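
A small sketch of that difference, assuming a console app (so there is no UI synchronization context; the exact thread IDs will vary per run). The code after the await frequently resumes on a different thread-pool thread than the one it started on:

using System;
using System.Threading.Tasks;

static async Task ShowThreadsAsync()
{
    Console.WriteLine($"Before await: thread {Environment.CurrentManagedThreadId}");
    await Task.Delay(100);
    Console.WriteLine($"After await:  thread {Environment.CurrentManagedThreadId}");
}

await ShowThreadsAsync();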


Solution 11:[11]

Actually, async/await chains are state machines generated by the C# compiler.

async/await does, however, interact with threads: the TPL uses the thread pool to execute Tasks.

The reason the application is not blocked is that the state machine can decide which co-routine to execute, repeat, check, and decide again.

Further reading:

What does async & await generate?

Async Await and the Generated StateMachine

Asynchronous C# and F# (III.): How does it work? - Tomas Petricek

Edit:

Okay. It seems my elaboration was incorrect. However, I do have to point out that state machines are important assets for async/await. Even if you take asynchronous I/O into account, you still need a helper to check whether the operation is complete; therefore we still need a state machine that determines which routines can be executed asynchronously together.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Formalist
Solution 2 Stephen Cleary
Solution 3 BoltClock
Solution 4 Community
Solution 5 Margaret Bloom
Solution 6
Solution 7
Solution 8
Solution 9
Solution 10 Welcor
Solution 11