Asynchronous programming with async, await, Task in C#

C# and .NET (Framework 4.5 and later, as well as .NET Core) support asynchronous programming using native classes, functions, and reserved keywords.

Before we look at asynchronous programming, let's understand synchronous programming using the following console example.
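A minimal sketch of such a synchronous console program, using the LongProcess() and ShortProcess() method names discussed below (the exact listing is illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        LongProcess();   // runs to completion first
        ShortProcess();  // only starts after LongProcess finishes
        Console.ReadLine();
    }

    static void LongProcess()
    {
        Console.WriteLine("LongProcess Started");
        Thread.Sleep(4000); // simulates a long-running task (4 seconds)
        Console.WriteLine("LongProcess Completed");
    }

    static void ShortProcess()
    {
        Console.WriteLine("ShortProcess Started");
        Console.WriteLine("ShortProcess Completed");
    }
}
```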

In the above example, the LongProcess() method stands in for some long-running task such as reading a file from the server, calling a web API that returns a large amount of data, or uploading or downloading a big file. It takes longer to execute (Thread.Sleep(4000) holds it for 4 seconds just to simulate a long execution time). The ShortProcess() is a simple method that gets executed after the LongProcess() method.

The above program executes synchronously. Execution starts from the Main() method, which first executes the LongProcess() method and then the ShortProcess() method. During the execution, the application gets blocked and becomes unresponsive (you can see this mainly in Windows-based applications). This is called synchronous programming, where execution does not move to the next line until the current line has executed completely.

What is Asynchronous Programming?

In asynchronous programming, the code gets executed in a thread without having to wait for an I/O-bound or long-running task to finish. For example, in the asynchronous programming model, the LongProcess() method will be executed in a separate thread from the thread pool, and the main application thread will continue to execute the next statement.

Microsoft recommends the Task-based Asynchronous Pattern for implementing asynchronous programming in .NET Framework or .NET Core applications, using the async and await keywords together with the Task or Task<TResult> class.

Now let's rewrite the above example in the asynchronous pattern using the async keyword.
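A sketch of the asynchronous version (the final await keeps the console program alive long enough to observe the completion message; this detail is an assumption, not part of the original listing):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Task longTask = LongProcess(); // starts and returns control at the first await
        ShortProcess();                // runs without waiting for LongProcess
        await longTask;                // wait before the program exits
    }

    static async Task LongProcess()
    {
        Console.WriteLine("LongProcess Started");
        await Task.Delay(4000); // asynchronous 4-second delay
        Console.WriteLine("LongProcess Completed");
    }

    static void ShortProcess()
    {
        Console.WriteLine("ShortProcess Started");
        Console.WriteLine("ShortProcess Completed");
    }
}
```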

In the above example, the Main() method is marked with the async keyword, and the return type is Task. The async keyword marks the method as asynchronous. Note that all the methods in the method chain must be async in order to implement asynchronous programming. So, the Main() method must be async to make child methods asynchronous.

The LongProcess() method is also marked with the async keyword, which makes it asynchronous. The await Task.Delay(4000); suspends execution for 4 seconds without blocking the thread.

Now, the program starts executing from the async Main() method in the main application thread. The async LongProcess() method gets executed in a separate thread and the main application thread continues execution of the next statement which calls ShortProcess() method and does not wait for the LongProcess() to complete.

async, await, and Task

Use async along with await and Task if the async method returns a value back to the calling code. We used only the async keyword in the above program to demonstrate the simple asynchronous void method.

The await keyword waits for the async method to produce its value. Execution of the calling method is suspended at that point (without blocking the thread) until the awaited value is available.

The Task class represents an asynchronous operation, and the Task<TResult> generic class represents an operation that can return a value. In the above example, await Task.Delay(4000) starts an async operation that completes after 4 seconds, and await suspends execution until then.

The following demonstrates an async method that returns a value.
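A sketch of a value-returning async method, matching the LongProcess() signature discussed below (the return value 10 is illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Task<int> result = LongProcess(); // starts the operation
        ShortProcess();                   // continues without waiting
        int val = await result;           // suspends here until the value is ready
        Console.WriteLine($"Result: {val}");
    }

    static async Task<int> LongProcess()
    {
        Console.WriteLine("LongProcess Started");
        await Task.Delay(4000);
        Console.WriteLine("LongProcess Completed");
        return 10;
    }

    static void ShortProcess()
    {
        Console.WriteLine("ShortProcess Completed");
    }
}
```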

In the above example, in the static async Task<int> LongProcess() method, Task<int> is used to indicate that the return value type is int. int val = await result; suspends execution there until the result is populated with the return value. Once the value is available, it is automatically assigned to val as an integer.

An async method should return void, Task, or Task<TResult>, where TResult is the return type of the async method. Returning void is normally reserved for event handlers. The async keyword allows us to use the await keyword within the method so that we can wait for the asynchronous operation to complete before running code that depends on its return value.

If you have multiple async methods that return values, you can use await for each method just before you need its return value in further steps.
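A sketch of this pattern (the method names LongProcess1/LongProcess2 and the delays are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Task<int> result1 = LongProcess1(); // both operations start immediately
        Task<int> result2 = LongProcess2();

        int val1 = await result1; // await just before the values are needed
        int val2 = await result2;
        Console.WriteLine($"Sum: {val1 + val2}");
    }

    static async Task<int> LongProcess1()
    {
        await Task.Delay(2000);
        return 10;
    }

    static async Task<int> LongProcess2()
    {
        await Task.Delay(3000);
        return 20;
    }
}
```

Because both tasks start before either is awaited, the total elapsed time is roughly that of the slower operation, not the sum of the two.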

In the above program, we do await result1 and await result2 just before we need to pass the return value to another method.

Thus, you can use async, await, and Task to implement asynchronous programming in .NET Framework or .NET Core using C#.






Summary: in this tutorial, you’ll learn about the task-based asynchronous pattern (TAP) in C# and how to use the C# Task class to create asynchronous operations.

Introduction to the task-based asynchronous pattern (TAP)

.NET originally introduced the Asynchronous Programming Model (APM), which provides a way to perform I/O-bound operations asynchronously.

The APM is based on callback concepts:

  • A method that represents an asynchronous operation that accepts a callback.
  • When the asynchronous operation completes, the method invokes the callback to notify the calling code.

Because the APM is quite difficult to use, .NET later introduced the event-based asynchronous pattern (EAP), which performs asynchronous operations using events.

Unlike the APM, in EAP, the method raises an event when the asynchronous operation completes instead of calling a callback.

EAP is easier to use and has better error handling than the APM, but it is still quite complex in practice.

To solve this issue, C# introduced the task-based asynchronous pattern (TAP), which greatly simplifies asynchronous programming and makes it easier to write asynchronous code.

TAP consists of the following key components:

  • The Task class – represents an asynchronous operation.
  • The async / await keywords – define asynchronous methods and wait for the completion of asynchronous operations.
  • Task-based API – a set of classes that work seamlessly with the Task class and async/await keywords.

TAP has the following advantages:

  • Improved performance – TAP can improve an application’s performance by allowing it to perform I/O-bound operations asynchronously, freeing up the CPU for other tasks.
  • Simplified code – TAP allows you to write asynchronous code that reads like synchronous code, which makes it easier to understand.
  • Better resource management – TAP optimizes system resources by allowing applications to perform asynchronous operations without blocking threads.

In this tutorial, we’ll focus on the Task class, and how to use it to execute asynchronous operations.

The Task class

The Task class is a core concept of the TAP. It represents an asynchronous operation that can be executed in various ways.

Suppose you have a method called GetRandomNumber() that performs a long-running operation and returns a random number between 1 and 100, like this:
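A sketch of such a method (note that Random.Next(1, 100) actually yields 1–99, since the upper bound is exclusive; the exact bounds are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    public static int GetRandomNumber()
    {
        Thread.Sleep(1000); // simulates a long-running operation (one second)
        var random = new Random();
        return random.Next(1, 100); // a random number in the range 1–99
    }
}
```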

Unlike a regular function, GetRandomNumber() uses Thread.Sleep() to delay for one second before returning a random number. The purpose of the Thread.Sleep() call is to simulate a long-running operation that takes about one second to complete.

Running a task

To execute the GetRandomNumber() method asynchronously, you create a new Task object and call the GetRandomNumber() method in a lambda expression passed to the Task's constructor:

and start executing the task by calling the Start() method of the Task object:

Put it all together:
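A sketch of the assembled program (the message texts are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var task = new Task(() =>
        {
            var number = GetRandomNumber();
            Console.WriteLine($"The random number is {number}.");
        });

        task.Start(); // schedules the task on the thread pool

        Console.WriteLine("Start the program...");
        Console.ReadLine(); // keep the main thread alive until you press a key
    }

    static int GetRandomNumber()
    {
        Thread.Sleep(1000); // simulates a long-running operation
        return new Random().Next(1, 100);
    }
}
```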

Note that the task.Start() method doesn't block the main thread, so you see the following message first:

…before the random number:

The Console.ReadLine() call blocks the main thread until you press a key. We use it to wait for the thread-pool thread scheduled by the Task object to complete.

If you don’t block the main thread, it’ll be terminated after the program displays the message “Start the program…”.

Notice that the Task constructor accepts many other options that you don’t need to worry about for now.

Behind the scenes, the program uses a thread pool for executing the asynchronous operation. The Start() method schedules the operation for execution.

To prove this, we can display the thread id and whether or not the thread belongs to the managed thread pool:
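A sketch of that check:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var task = new Task(() =>
        {
            // Both properties describe the thread the task body runs on.
            Console.WriteLine($"Thread id: {Thread.CurrentThread.ManagedThreadId}");
            Console.WriteLine($"Is thread pool thread: {Thread.CurrentThread.IsThreadPoolThread}");
        });

        task.Start();
        Console.ReadLine();
    }
}
```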

The output shows that the thread id is 5 and that the thread belongs to the thread pool. Note that you will likely see a different number.

Since the code for creating a Task object and starting it is quite verbose, you can shorten it by using the static Run() method of the Task class:
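A sketch of the shortened form (GetRandomNumber is the simulated operation from earlier in the article):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Run creates the task and schedules it on the thread pool in one call.
var task = Task.Run(() =>
{
    var number = GetRandomNumber();
    Console.WriteLine($"The random number is {number}.");
});

Console.ReadLine();

static int GetRandomNumber()
{
    Thread.Sleep(1000); // simulates a long-running operation
    return new Random().Next(1, 100);
}
```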

The Run() method queues the operation (GetRandomNumber) on the thread pool for execution.

Similarly, you can use the StartNew() method of the Factory property of the Task class to create a new task and schedule its execution:
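A sketch of the Task.Factory.StartNew variant:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// StartNew also creates and schedules the task in a single step.
var task = Task.Factory.StartNew(() =>
{
    var number = GetRandomNumber();
    Console.WriteLine($"The random number is {number}.");
});

Console.ReadLine();

static int GetRandomNumber()
{
    Thread.Sleep(1000);
    return new Random().Next(1, 100);
}
```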

Getting the result from a task

The Run() method returns a Task<TResult> object that represents the result of the asynchronous operation.

In our example, GetRandomNumber() returns an integer; therefore, Task.Run() returns a Task<int> object:

To get the number returned by the GetRandomNumber() method, you use the Result property of the task object. Note that reading Result blocks the calling thread until the task completes:

Put it all together.
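A sketch of the assembled program:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Console.WriteLine("Start the program...");

        Task<int> task = Task.Run(() => GetRandomNumber());

        // Result blocks the calling thread until the task completes.
        Console.WriteLine($"The random number is {task.Result}.");
    }

    static int GetRandomNumber()
    {
        Thread.Sleep(1000); // simulates a long-running operation
        return new Random().Next(1, 100);
    }
}
```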

  • Use task-based asynchronous programming (TAP) for developing asynchronous programs.
  • Use the Task class to execute asynchronous operations.

Dot Net Tutorials

Task-Based Asynchronous Programming in C#


In this article, I will discuss Task-Based Asynchronous Programming in C# with Examples. Please read our previous article, which discusses How to Control the Result of a Task in C# using TaskCompletionSource with Examples. In C#, the Task class is used to implement asynchronous programming, i.e., executing operations asynchronously; it was introduced with .NET Framework 4.0. Before covering the theory, i.e., what a Task is and the benefits of using it, let us first discuss how to create and use a Task in C#.

Working with Task in C#:

The Task-related classes belong to the System.Threading.Tasks namespace. So, the first and foremost step is to import the System.Threading.Tasks namespace into your program. Then, you can create and access task objects using the Task class.

Note: In general, the Task class will always represent a single operation, and that operation will be executed asynchronously on a thread pool thread rather than synchronously on the application’s main thread. 

Example to Understand Task Class and Start Method in C#

We have already discussed async and await operators to create and execute the asynchronous methods. Now, let us try to understand how to implement asynchronous programming using the Task class. In the example below, we create the task object by using the Task class and then execute the method asynchronously by calling the Start method on the Task object. The method pointed by the Task object will be executed when we call the Start method.
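A sketch of such an example (the method name PrintCounter and the messages are illustrative, not the article's original listing):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Console.WriteLine($"Main thread started. Thread id: {Thread.CurrentThread.ManagedThreadId}");

        // The Task constructor takes an Action delegate; the method runs
        // only after Start() is called.
        Task task1 = new Task(PrintCounter);
        task1.Start();

        Console.WriteLine("Main thread completed.");
        Console.ReadKey();
    }

    static void PrintCounter()
    {
        Console.WriteLine($"Child thread started. Thread id: {Thread.CurrentThread.ManagedThreadId}");
        for (int count = 1; count <= 5; count++)
        {
            Console.WriteLine($"Count: {count}");
        }
        Console.WriteLine("Child thread completed.");
    }
}
```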

In the above example, we created the task object, i.e., task1, using the Task class and then called the Start method to start the task execution. The task object task1 will be executed asynchronously on a thread pool thread. Here, the Task class constructor expects an Action delegate. You can create an instance of the Action delegate and pass it as a parameter to the constructor, or you can directly pass a method whose signature matches the Action delegate. When you run the above application, you will get the following output.

Task-based Asynchronous Programming in C#

As you can see in the above output, two threads are used to execute the application code: the main thread and the child thread. And you can observe that both threads run asynchronously.

Example to Understand How to Create a Task Object Using Factory Property in C#

In the previous example, the method executes asynchronously only when we invoke the Start method. In the following example, we create the task object using the Factory property, which starts executing the method immediately; we don't need to call the Start method.

Here, the Factory property of the Task class will return an instance of the TaskFactory object. The TaskFactory class has one method called StartNew, which will require an Action delegate as a parameter. So, we can create an instance of Action delegate and pass that instance as a parameter to this StartNew method. Alternatively, you can directly pass a method matching the Action delegate signature. For a better understanding, please have a look at the following example.
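A sketch of the Factory-based version (PrintCounter is the same illustrative method as above):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // StartNew creates the task and schedules it in one step;
        // no explicit Start() call is needed.
        Task task1 = Task.Factory.StartNew(PrintCounter);
        Console.ReadKey();
    }

    static void PrintCounter()
    {
        for (int count = 1; count <= 5; count++)
            Console.WriteLine($"Count: {count}");
    }
}
```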

It will give you the same output as the previous example. The only difference between the previous and this example is that we create and run the task asynchronously using a single statement.

Example: Creating a Task Object using the Run method

In the following example, we create a task using the Run method of the Task class.
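A sketch of the Task.Run version (PrintCounter remains the illustrative method):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Run queues the work on the thread pool and starts it immediately.
        Task task1 = Task.Run(PrintCounter);
        Console.ReadKey();
    }

    static void PrintCounter()
    {
        for (int count = 1; count <= 5; count++)
            Console.WriteLine($"Count: {count}");
    }
}
```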

So, we have discussed three different ways to create and start a task in C#. From a performance point of view, the Task.Run and Task.Factory.StartNew methods are preferable for creating and starting tasks. But if you want task creation and execution to be separate, you need to create the task using the Task class and then call the Start method when required.

Task using Wait in C#:

As we already discussed, tasks run asynchronously on thread pool threads, alongside the application's main thread. In the examples discussed so far in this article, the child thread continues its execution until it finishes its task, even after the main thread of the application has completed.

If you want to make the main thread wait until all child tasks are completed, then you need to use the Wait method of the Task class. The Wait method blocks the calling thread until the assigned task has completed its execution. In the following example, we call the Wait() method on the task1 object to make the program execution wait until task1 completes its execution.
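A sketch of the Wait() usage (PrintCounter is the illustrative method used above):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Task task1 = Task.Run(PrintCounter);

        task1.Wait(); // blocks the main thread until task1 completes

        Console.WriteLine("Main thread completed.");
    }

    static void PrintCounter()
    {
        for (int count = 1; count <= 5; count++)
            Console.WriteLine($"Count: {count}");
    }
}
```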

As you can see in the above code, we are calling the Wait() method on the task object, i.e., task1. So, the main thread execution will wait until the task1 object completes its execution. Now run the application and see the output shown in the image below.

Wait method in Task-based Asynchronous Programming

Task using Anonymous Method and Lambda Expression in C#:

In all our previous examples, we executed a named method using a Task. We have also seen that the Task class constructor, the Run method, and the StartNew method each expect an Action delegate. So, instead of executing a named method, we can also execute logic using an anonymous method or a lambda expression. For a better understanding, please have a look at the following example.
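A sketch showing both forms (the messages are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Anonymous method:
        Task task1 = new Task(delegate
        {
            Console.WriteLine("Running logic from an anonymous method.");
        });
        task1.Start();

        // Lambda expression:
        Task task2 = Task.Run(() =>
        {
            Console.WriteLine("Running logic from a lambda expression.");
        });

        Task.WaitAll(task1, task2); // wait for both before exiting
    }
}
```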

So, we have discussed how to work with tasks using different approaches. Now let us discuss what Task is and why we should use Task.

What Is a Task in C#?

A task in C# is used to implement Task-based Asynchronous Programming and was introduced with the .NET Framework 4. The Task object is typically executed asynchronously on a thread pool thread rather than synchronously on the application’s main thread. A task scheduler is responsible for starting the Task and also responsible for managing it. By default, the Task scheduler uses threads from the thread pool to execute the Task.

The key type used in task-based asynchronous programming is Task and its generic counterpart Task<T>, where T is the result type. The Task class lives in the System.Threading.Tasks namespace and represents an asynchronous operation.

What is a Thread Pool in C#?

A thread pool in C# is a managed pool of threads created and managed by the .NET runtime to execute asynchronous tasks and parallel workloads efficiently. Thread pools are a fundamental component of the .NET Framework and are used to improve the efficiency and performance of multithreaded and asynchronous programming.

So, a Thread Pool in C# is a collection of threads that can perform several tasks in the background. Once a thread completes its task, it is returned to the thread pool. This reusability of threads prevents an application from creating too many threads and ultimately reduces memory consumption.

Advantages of Task-Based Asynchronous Programming in C#:

Here are some key advantages of using TAP (Task-Based Asynchronous Programming):

Simplified Code Structure:

  • Readability: Task-Based Asynchronous Programming allows for writing asynchronous code similar in structure to synchronous code, improving readability.
  • Maintainability: The code is easier to maintain as it avoids the complexity of callbacks and manual thread management in older models.

Improved Scalability and Performance:

  • Efficient Resource Utilization: Asynchronous operations free up the calling thread (typically UI or server threads) to handle other tasks. This improves resource utilization and responsiveness, particularly in UI applications or web services.
  • Scalability: TAP can enhance the scalability of applications, especially those that handle many concurrent I/O-bound operations.

Language and Framework Support:

  • Language Integration: Task-based asynchronous Programming is seamlessly integrated with C# language features, notably async and await keywords, making asynchronous programming more intuitive.
  • Framework Compatibility: It’s fully supported across the .NET ecosystem, including newer versions of .NET Core and .NET 5/6, ensuring compatibility and ease of use.

Exception Handling:

  • Simplified Exception Handling: Exceptions in asynchronous methods can be caught and handled using standard try-catch blocks, unlike older patterns that require more complex handling.

Composability and Flexibility:

  • Composable Operations: Tasks can be easily combined and composed. For instance, you can await multiple tasks concurrently using Task.WhenAll, or await the first task to complete using Task.WhenAny.
  • Cancellation Support: Task-Based Asynchronous Programming supports cancellation using the CancellationToken class, allowing for responsive cancellation of asynchronous operations.

Unified Model for Asynchronous Operations:

  • Consistency: TAP provides a unified approach for all asynchronous operations, whether CPU-bound or I/O-bound, creating consistency in how asynchronous code is written and understood across different applications.

Progress Reporting:

  • Progress Feedback: The model supports progress reporting out of the box, which is particularly useful in UI applications where you need to update the UI to reflect the progress of an asynchronous operation.

Disadvantages of Task-Based Asynchronous Programming in C#:

  • Complexity in Error Handling: Asynchronous programming can make error handling more complex. Exceptions thrown in asynchronous methods are captured and placed on the Task, and they need to be handled using await or by examining the Task object. Unobserved exceptions can lead to unhandled exceptions.
  • Potential for Deadlocks: Misusing async and await, especially with Task.Result or Task.Wait(), can lead to deadlocks, particularly in UI applications or when blocking on asynchronous code.
  • Resource Management: Asynchronous operations can lead to more complex resource management scenarios. Ensuring that resources are properly disposed of or that certain operations are thread-safe adds complexity to the code.
  • Scalability Issues: Asynchronous programming is great for scalability, but improper use (like creating too many tasks or not using I/O-bound asynchronous APIs correctly) can consume many resources and degrade performance.
  • Debugging Difficulty: Debugging asynchronous code can be more difficult than synchronous code. The execution flow is not linear, making it harder to follow and understand, especially when dealing with multiple concurrent asynchronous operations.
  • Overhead: Some overhead is associated with managing the state and context of asynchronous operations. For very small operations, the overhead of setting up the asynchronous operation might outweigh its benefits.
  • Improper Usage: It’s easy to misuse asynchronous programming by applying it where it’s not needed, leading to unnecessary complexity. Not every operation benefits from being made asynchronous.

In the next article, I will discuss Chaining Tasks Using Continuation Tasks in C# with Examples. In this article, I have tried to explain Task-based Asynchronous Programming in C# using the Task class. I hope you now understand how to create and use Task class objects in C#.

About the Author: Pranaya Rout

Pranaya Rout has published more than 3,000 articles in his 11-year career. Pranaya Rout has very good experience with Microsoft Technologies, Including C#, VB, ASP.NET MVC, ASP.NET Web API, EF, EF Core, ADO.NET, LINQ, SQL Server, MYSQL, Oracle, ASP.NET Core, Cloud Computing, Microservices, Design Patterns and still learning new technologies.



In the first example you provided, the main thread executes first and completes, and then the child thread executes. That looks synchronous in nature. So, in that case, what is the use of the task? Can you please provide a little more detail?

No, it’s not like that. The Task is executed asynchronously in the background. The main thread also runs in parallel, and it may or may not complete before the child threads; the ordering is simply unpredictable. If you do some time-consuming work in the main thread, you will see the child threads complete before the main thread.




last modified July 5, 2023

In this article we show how to use Task for concurrent operations in C#.

Concurrent programming is used for two kinds of tasks: I/O-bound and CPU-bound tasks. Requesting data from a network, accessing a database, or reading and writing files are I/O-bound tasks. CPU-bound tasks are tasks that are computationally expensive, such as mathematical calculations or graphics processing.

Asynchronous operations are suited for I/O-bound tasks. Parallel operations are suited for CPU-bound tasks. Unlike in other languages, Task can be used for both asynchronous and parallel operations.

Task represents a concurrent operation, while Task<TResult> represents a concurrent operation that can return a value.

The Task.Run method is used to run CPU-bound code concurrently, ideally in parallel. It queues the specified work to run on the ThreadPool and returns a Task or Task<TResult> handle for that work.

.NET contains numerous methods such as StreamReader.ReadLineAsync or HttpClient.GetAsync that execute I/O-bound code asynchronously. They are used together with async/await keywords.

C# Task.Run

The Task.Run method puts a task on a different thread. It is suitable for CPU-bound tasks.
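A minimal sketch (the messages are illustrative):

```csharp
using System;
using System.Threading.Tasks;

// The lambda runs on a thread-pool thread.
Task.Run(() => Console.WriteLine("task running on another thread"));

Console.WriteLine("main thread continues");
Console.ReadLine(); // wait so the task has a chance to finish
```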

The main thread finishes before the generated task. To see the task finish, we use Console.ReadLine, which waits for user input.

Task<TResult> represents a task which returns a result.
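A sketch of a task that returns a computation result (summing 1 through 100 is illustrative):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

Task<int> task = Task.Run(() =>
{
    // A small CPU-bound computation.
    return Enumerable.Range(1, 100).Sum();
});

// Reading Result waits for the task to finish.
Console.WriteLine(task.Result); // prints 5050
```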

The program shows how to wait for a task that returns a computation result.

C# Task.Delay

Task.Delay creates a task which completes after a time delay.

A function that awaits such a task must be marked with the async keyword.
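A sketch of this (the method name doTask is illustrative):

```csharp
using System;
using System.Threading.Tasks;

Console.WriteLine("starting");
await doTask();
Console.WriteLine("finished");

async Task doTask()
{
    await Task.Delay(3000); // the task completes after three seconds
    Console.WriteLine("task done");
}
```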

Task.Delay creates a new task, which completes after three seconds. The await operator waits for the task to finish; it suspends execution of the enclosing method until the task is finished.

C# async Main method

When we are using the await operator inside the Main method, we have to mark it with the async modifier.
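A sketch of such a program; the data.txt filename is an assumption, and the file is created first so the example is self-contained:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Create the sample file so the read below always succeeds.
        await File.WriteAllTextAsync("data.txt", "This is a sample text file.\n");

        using var reader = new StreamReader("data.txt");
        string? line = await reader.ReadLineAsync(); // read the first line asynchronously
        Console.WriteLine(line);
    }
}
```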

This is a sample text file.

The example reads the first line of a file asynchronously. The work is done inside the Main method.

The ReadLineAsync method returns a Task<String> that represents an asynchronous read operation. The result in a task contains the next line from the stream, or is null if all the characters have been read.

C# Task.WaitAll

The Task.WaitAll method waits for all of the provided tasks to complete execution.

We measure the execution time of three asynchronous methods.
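A sketch of the measurement, using Task.Delay calls in place of the article's three asynchronous methods:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

var sw = Stopwatch.StartNew();

Task t1 = Task.Delay(1000); // stand-ins for three asynchronous methods
Task t2 = Task.Delay(2000);
Task t3 = Task.Delay(3000);

Task.WaitAll(t1, t2, t3); // total elapsed time is ~3 s, not 6 s

Console.WriteLine($"elapsed: {sw.ElapsedMilliseconds} ms");
```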

The Task.WaitAll waits for all of the provided tasks to complete execution.

C# Task.ContinueWith

The Task.ContinueWith creates a continuation that executes asynchronously when the target Task<TResult> completes.
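A sketch of chaining two operations (the computation is illustrative):

```csharp
using System;
using System.Threading.Tasks;

Task<int> first = Task.Run(() => 6 * 7);

// The continuation runs after `first` completes and receives it as `t`.
Task continuation = first.ContinueWith(t =>
    Console.WriteLine($"the answer is {t.Result}"));

continuation.Wait(); // prints: the answer is 42
```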

In the example, we chain two operations with ContinueWith .

C# multiple async requests

The HttpClient class is used for sending HTTP requests and receiving HTTP responses from the specified resource.

We send asynchronous GET requests to various web pages and get their response status codes.
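A sketch of the request fan-out; the URLs are placeholders, and the output depends on network availability:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

using var client = new HttpClient();

var urls = new[] { "http://example.com", "http://example.org" }; // placeholder URLs

var tasks = new List<Task<HttpResponseMessage>>();
foreach (var url in urls)
{
    tasks.Add(client.GetAsync(url)); // each request starts immediately
}

// WhenAll completes when every request has a response.
HttpResponseMessage[] responses = await Task.WhenAll(tasks);

foreach (var res in responses)
{
    Console.WriteLine((int)res.StatusCode);
}
```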

The GetAsync method sends a GET request to the specified URL and returns the response in an asynchronous operation. It returns a new task, which is added to the list of tasks.

The await unwraps the result of the operation.

We print the status of each request.

Task class - language reference

In this article we have used Task for concurrent operations in C#.

My name is Jan Bodnar and I am a passionate programmer with many years of programming experience. I have been writing programming articles since 2007. So far, I have written over 1400 articles and 8 e-books. I have over eight years of experience in teaching programming.


How-To Geek

How do tasks work in C#? Async/background threads

If you want to make web requests in C#, or just want to do some background processing, you'll need to use asynchronous background tasks to not block up the main thread.



To use Tasks, you must first understand the concept of async/await. C# tasks don't have to run asynchronously, but considering that their sole purpose is to represent an asynchronous operation, they almost always will run async. You don't want to run operations like fetching web requests or writing to hard drives on the main thread, because doing so would hold up the rest of the application (including the UI) while waiting for the result.

async/await is special syntax used to deal with asynchronous operations. If a function is marked as async, it will usually return a Task, except in the case of event handlers, which return void.

Inside the async function, you can use the await keyword to wait for async operations to finish without blocking the whole thread. Everything that comes after the await keyword will only run after the awaited operation finishes.

public async Task FetchWebResponse(string url)
{
    var response = await SendRequest(url);
}

The value being awaited must be a Task, as the two go hand in hand. When you call the SendRequest() function, it returns a Task<T>, and the program waits until that task finishes. You can think of await as a keyword used to wait for a task to finish and unwrap its value.

Tasks are wrappers used to deal with asynchronous functions. They essentially represent a value that will be returned in the future. You can use the await keyword to wait for the result, or access it directly by checking whether Task.IsCompleted is true and then reading the value of Task.Result.

You can create them by writing an async function with a return type of Task<T> . Then, all you have to do is return a value of type T, and .NET will interpret that as returning a Task. You can use await  inside this task to wait for async operations, which in turn return a task themselves.

You can start running a Task using Task.Run(Action action) . This will queue up the Task on the thread pool, which will run in the background on a different thread. The thread pool takes a queue of tasks, and assigns them to CPU threads for processing. Once they return, they're put into the list of completed tasks where their values can be accessed.

However, even though it's on a background thread, it's still very important to use async/await. If you make a blocking call to an API on a background thread and don't await it, .NET will keep that thread blocked until the call completes, filling up the thread pool with threads that do nothing but hurt performance.

If you need to await a task from the UI thread, start it with Task.Run , then check regularly to see if the task has been completed. If it has, you can handle the value.

You can also run and await tasks inside other Tasks. For example, say you have a function inside a task, DoExpensiveCalculation() , that takes a while to execute. Rather than processing it synchronously, you can write it as a Task, and queue up a background thread at the beginning of the main task. Then, when you need the value from that calculation, you can simply await  the task, and it will yield until the task is completed, and the return value is returned.



Tasks in C#

A Task in C# is an object that represents an operation or piece of work that executes in an asynchronous manner. It was introduced in .NET Framework 4.0 to support asynchronous programming.

It’s a basic component of the Task Parallel Library (TPL) and is mostly used for asynchronous programming and parallel processing. In this blog, we will explore Task and its usage in more detail.

What is a Task in C#?

In C#, a Task is basically used to implement the asynchronous programming model; the .NET runtime executes a Task object asynchronously on a separate thread from the thread pool.

  • Whenever you create a Task, the TPL (Task Parallel Library) instructs the Task Scheduler to execute the operation on a separate thread.
  • The Task Scheduler is responsible for executing the Task; by default, it requests a worker thread from the Thread Pool to execute the Task.
  • The thread pool manager determines whether to create a new thread or reuse an existing thread from the thread pool to execute the operation.
  • The TPL (Task Parallel Library) abstracts away the complexity of managing threads and synchronization, allowing you to focus on defining tasks and the high-level structure of your asynchronous code.


Creating Tasks in C#

To create a Task in C#, you first need to import the System.Threading.Tasks namespace into your program; then you can use the Task class to create an object and access its properties.

Example – 1: Creating Tasks in C# using Task class and Start method.

In the below example,

  • We create a Task object, Task t , passing the method PrintEvenNumbers to its constructor.
  • Finally, we start the Task by calling the t.Start() method.

When we run the above example, it generates the output below.

[Output screenshot: Tasks in C# output 1.0]

The output clearly demonstrates that the Main method executes on thread number 1, while the Task executes on a separate thread, thread number 3.

Different ways of creating Tasks in C#:

There are various ways to create a Task object in C#/.NET 4.0 and later. Some of them are as follows.

1) Task creation using a factory method:  You can use the Task.Factory.StartNew() method to create a Task instance and start it in a single line of code, as follows.
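A sketch of this one-liner (the printed message is illustrative):

```csharp
// Create and start a Task in one call via the TPL's default factory.
Task t = Task.Factory.StartNew(() => Console.WriteLine("Task via Task.Factory.StartNew"));
```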

2) Task creation using the Action generic delegate:  You can use the syntax below to create a Task from an Action (a generic delegate).
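For example (the delegate body is illustrative):

```csharp
// Wrap the work in an Action delegate, then pass it to the Task constructor.
Action action = () => Console.WriteLine("Task via Action delegate");
Task t = new Task(action);
t.Start();
```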

3) Task creation using a delegate:  In the code sample below, we use the delegate keyword to create a Task instance and then start it.
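A sketch of this form:

```csharp
// Use an anonymous method created with the delegate keyword.
Task t = new Task(delegate { Console.WriteLine("Task via anonymous delegate"); });
t.Start();
```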

Similarly, we can also use the Task.Run() method to create a Task object and start it in a single line, as follows.
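For example:

```csharp
// Task.Run both creates and schedules the Task in one call.
Task t = Task.Run(() => Console.WriteLine("Task via Task.Run"));
```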

4) Task creation using a lambda and a named method:  In the code sample below, we use a lambda expression and a named method to create a Task instance and then start it.
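A sketch (the PrintMessage helper is illustrative):

```csharp
// A lambda that calls a named method.
Task t = new Task(() => PrintMessage());
t.Start();

static void PrintMessage() => Console.WriteLine("Task via lambda and named method");
```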

Example – 2: Task creation and execution flow

Let’s go through an example and see the Task creation and execution in detail.
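The original listing was lost in this copy; a sketch consistent with the description below (PrintOddEvenNumbers and t.Start() come from the text, the bodies are assumptions):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Console.WriteLine($"Main thread ID: {Thread.CurrentThread.ManagedThreadId}");

        Task t = new Task(PrintOddEvenNumbers);
        t.Start();

        Console.WriteLine("Main method completed.");
        // Note: Main does not wait for t, so the child thread may still be
        // running after this message prints, as the output below shows.
    }

    static void PrintOddEvenNumbers()
    {
        Console.WriteLine($"Task thread ID: {Thread.CurrentThread.ManagedThreadId}");
        for (int i = 1; i < 10; i += 2) Console.Write(i + " "); // odd numbers
        Console.WriteLine();
        for (int i = 0; i < 10; i += 2) Console.Write(i + " "); // even numbers
        Console.WriteLine();
    }
}
```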

  • We have created a method, PrintOddEvenNumbers , that prints odd and even numbers.
  • In the Main method, we create a task instance with Task t = new Task(PrintOddEvenNumbers); and then start it by calling t.Start() .
  • If you look at the console output, once the Task executes successfully, it prints the odd and even numbers less than 10 on the console window.

[Output screenshot: Tasks in C# output 1.1]

In the above output window, the messages make it clear that the Task executed on a separate child thread (thread number 3), whereas the Main method executed on the main thread (thread number 1).

Also, if you observe the above result,

  • Both threads (main and child) started executing concurrently (asynchronously); the main thread did not wait for the child thread to complete.
  • The child thread continued executing until it finished its work, even after the Main method had completed.

If you want to make the main thread wait until the other tasks have completed, you can do so using the Task.Wait() method. To learn more, please check my next article, Tasks in C# Extended.
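For instance, a minimal sketch:

```csharp
Task t = Task.Run(() => Console.WriteLine("Working..."));
t.Wait(); // blocks the calling thread until the task completes
Console.WriteLine("Done.");
```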

Key Points of Tasks in C#:

Following are some of the key points to remember about Tasks.

  • You can use Tasks to run multiple operations concurrently; while a Task runs in the background, you can continue executing other work.
  • If a Task produces a result, you can access it through the Result property; however, reading Result blocks if the task hasn't completed yet.
  • You can chain tasks together using continuations, specifying what should happen when a Task completes, which enhances workflow control.
  • Exceptions thrown within a Task are captured and stored until the task is awaited or observed. You should handle them explicitly using try/catch blocks.
  • You can cancel a Task gracefully using cancellation tokens, which is useful for managing tasks that may take a long time.


In conclusion, a Task in C# helps you manage asynchronous and parallel programming; it makes your program more responsive and efficient when dealing with time-consuming operations. Please check our next articles on the topics below.

  • How to wait for Task to complete?
  • Get return value from Tasks?


How Async/Await Really Works in C#

Stephen Toub - MSFT

March 16th, 2023

Several weeks ago, the .NET Blog featured a post What is .NET, and why should you choose it? . It provided a high-level overview of the platform, summarizing various components and design decisions, and promising more in-depth posts on the covered areas. This post is the first such follow-up, deep-diving into the history leading to, the design decisions behind, and implementation details of async / await in C# and .NET.

The support for async / await has been around now for over a decade. In that time, it’s transformed how scalable code is written for .NET, and it’s both viable and extremely common to utilize the functionality without understanding exactly what’s going on under the covers. You start with a synchronous method like the following (this method is “synchronous” because a caller will not be able to do anything else until this whole operation completes and control is returned back to the caller):
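The code block referenced here did not survive extraction. Based on the CopyStreamToStream example the post discusses later, the synchronous method is along these lines (a reconstruction; details may differ from the original listing):

```csharp
public void CopyStreamToStream(Stream source, Stream destination)
{
    var buffer = new byte[0x1000];
    int numRead;
    while ((numRead = source.Read(buffer, 0, buffer.Length)) != 0)
    {
        destination.Write(buffer, 0, numRead);
    }
}
```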

Then you sprinkle a few keywords, change a few method names, and you end up with the following asynchronous method instead (this method is “asynchronous” because control is expected to be returned back to its caller very quickly and possibly before the work associated with the whole operation has completed):
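The asynchronous counterpart (again a reconstruction consistent with the surrounding description): the keywords async and await are sprinkled in, and Read / Write become ReadAsync / WriteAsync :

```csharp
public async Task CopyStreamToStreamAsync(Stream source, Stream destination)
{
    var buffer = new byte[0x1000];
    int numRead;
    while ((numRead = await source.ReadAsync(buffer, 0, buffer.Length)) != 0)
    {
        await destination.WriteAsync(buffer, 0, numRead);
    }
}
```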

Almost identical in syntax, still able to utilize all of the same control flow constructs, but now non-blocking in nature, with a significantly different underlying execution model, and with all the heavy lifting done for you under the covers by the C# compiler and core libraries.

While it’s common to use this support without knowing exactly what’s happening under the hood, I’m a firm believer that understanding how something actually works helps you to make even better use of it. For async / await in particular, understanding the mechanisms involved is especially helpful when you want to look below the surface, such as when you’re trying to debug things gone wrong or improve the performance of things otherwise gone right. In this post, then, we’ll deep-dive into exactly how await works at the language, compiler, and library level, so that you can make the most of these valuable features.

To do that well, though, we need to go way back to before async / await to understand what state-of-the-art asynchronous code looked like in its absence. Fair warning, it wasn’t pretty.

In the beginning…

All the way back in .NET Framework 1.0, there was the Asynchronous Programming Model pattern, otherwise known as the APM pattern, otherwise known as the Begin/End pattern, otherwise known as the IAsyncResult pattern. At a high-level, the pattern is simple. For a synchronous operation DoStuff :
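The signature sketch that belongs here was lost; a plausible shape (argument and return types are illustrative):

```csharp
class Handler
{
    public int DoStuff(string arg);
}
```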

there would be two corresponding methods as part of the pattern: a BeginDoStuff method and an EndDoStuff method:
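The corresponding APM pair would look roughly like this (same illustrative types as above):

```csharp
class Handler
{
    public IAsyncResult BeginDoStuff(string arg, AsyncCallback? callback, object? state);
    public int EndDoStuff(IAsyncResult asyncResult);
}
```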

BeginDoStuff would accept all of the same parameters as does DoStuff , but in addition it would also accept an AsyncCallback delegate and an opaque state object , one or both of which could be null . The Begin method was responsible for initiating the asynchronous operation, and if provided with a callback (often referred to as the “continuation” for the initial operation), it was also responsible for ensuring the callback was invoked when the asynchronous operation completed. The Begin method would also construct an instance of a type that implemented IAsyncResult , using the optional state to populate that IAsyncResult ‘s AsyncState property:
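For reference, the IAsyncResult interface itself is:

```csharp
namespace System
{
    public interface IAsyncResult
    {
        object? AsyncState { get; }
        WaitHandle AsyncWaitHandle { get; }
        bool IsCompleted { get; }
        bool CompletedSynchronously { get; }
    }
}
```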

This IAsyncResult instance would then both be returned from the Begin method as well as passed to the AsyncCallback when it was eventually invoked. When ready to consume the results of the operation, a caller would then pass that IAsyncResult instance to the End method, which was responsible for ensuring the operation was completed (synchronously waiting for it to complete by blocking if it wasn’t) and then returning any result of the operation, including propagating any errors/exceptions that may have occurred. Thus, instead of writing code like the following to perform the operation synchronously:
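The synchronous consumption sketch (Use is a hypothetical helper standing in for "do something with the result"):

```csharp
try
{
    int i = handler.DoStuff(arg);
    Use(i);
}
catch (Exception e)
{
    // ... handle failure
}
```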

the Begin/End methods could be used in the following manner to perform the same operation asynchronously:
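A reconstruction of the naive callback-based equivalent (same hypothetical Use helper):

```csharp
handler.BeginDoStuff(arg, iar =>
{
    try
    {
        int i = handler.EndDoStuff(iar);
        Use(i);
    }
    catch (Exception e)
    {
        // ... handle failure
    }
}, null);
```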

For anyone who’s dealt with callback-based APIs in any language, this should feel familiar.

Things only got more complicated from there, however. For instance, there’s the issue of “stack dives.” A stack dive is when code repeatedly makes calls that go deeper and deeper on the stack, to the point where it could potentially stack overflow. The Begin method is allowed to invoke the callback synchronously if the operation completes synchronously, meaning the call to Begin might itself directly invoke the callback. And “asynchronous” operations that complete synchronously are actually very common; they’re not “asynchronous” because they’re guaranteed to complete asynchronously but rather are just permitted to.

For example, consider an asynchronous read from some networked operation, like receiving from a socket. If you need only a small amount of data for each individual operation, such as reading some header data from a response, you might put a buffer in place in order to avoid the overhead of lots of system calls. Instead of doing a small read for just the amount of data you need immediately, you perform a larger read into the buffer and then consume data from that buffer until it’s exhausted; that lets you reduce the number of expensive system calls required to actually interact with the socket. Such a buffer might exist behind whatever asynchronous abstraction you’re using, such that the first “asynchronous” operation you perform (filling the buffer) completes asynchronously, but then all subsequent operations until that underlying buffer is exhausted don’t actually need to do any I/O, instead just pulling from the buffer, and can thus all complete synchronously. When the Begin method performs one of these operations, and finds it completes synchronously, it can then invoke the callback synchronously. That means you have one stack frame that called the Begin method, another stack frame for the Begin method itself, and now another stack frame for the callback. Now what happens if that callback turns around and calls Begin again? If that operation completes synchronously and its callback is invoked synchronously, you’re now again several more frames deep on the stack. And so on, and so on, until eventually you run out of stack.

This is a real possibility that’s easy to repro. Try this program on .NET Core:
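The original repro program (which used sockets) was lost here. The following simplified sketch exhibits the same failure mode: a hypothetical Begin method whose operation always completes synchronously and invokes its callback inline, while the callback starts the next operation, so every operation adds frames until the stack is exhausted.

```csharp
using System;
using System.Threading;

// Minimal always-synchronously-completed IAsyncResult (hypothetical type).
class SyncResult : IAsyncResult
{
    public object? AsyncState => null;
    public WaitHandle AsyncWaitHandle => throw new NotSupportedException();
    public bool IsCompleted => true;
    public bool CompletedSynchronously => true;
}

class Program
{
    static IAsyncResult BeginOp(AsyncCallback callback, object? state)
    {
        var ar = new SyncResult();
        callback(ar); // invoked synchronously, one frame deeper each time
        return ar;
    }

    static void Main()
    {
        AsyncCallback? callback = null;
        callback = ar => BeginOp(callback!, null); // chain the next op inline
        BeginOp(callback, null); // recurses until the process stack overflows
    }
}
```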

Stack overflow due to improper handling of synchronous completion

So, compensation for this was built into the APM model. There are two possible ways to compensate for this:

  • Don’t allow the AsyncCallback to be invoked synchronously. If it’s always invoked asynchronously, even if the operation completes synchronously, then the risk of stack dives goes away. But so too does performance, because operations that complete synchronously (or so quickly that they’re observably indistinguishable) are very common, and forcing each of those to queue its callback adds measurable overhead.
  • Employ a mechanism that allows the caller rather than the callback to do the continuation work if the operation completes synchronously. That way, you escape the extra method frame and continue doing the follow-on work no deeper on the stack.

The APM pattern goes with option (2). For that, the IAsyncResult interface exposes two related but distinct members: IsCompleted and CompletedSynchronously . IsCompleted tells you whether the operation has completed: you can check it multiple times, and eventually it’ll transition from false to true and then stay there. In contrast, CompletedSynchronously never changes (or if it does, it’s a nasty bug waiting to happen); it’s used to communicate between the caller of the Begin method and the AsyncCallback which of them is responsible for performing any continuation work. If CompletedSynchronously is false , then the operation is completing asynchronously and any continuation work in response to the operation completing should be left up to the callback; after all, if the work didn’t complete synchronously, the caller of Begin can’t really handle it because the operation isn’t known to be done yet (and if the caller were to just call End, it would block until the operation completed). If, however, CompletedSynchronously is true , if the callback were to handle the continuation work, then it risks a stack dive, as it’ll be performing that continuation work deeper on the stack than where it started. Thus, any implementations at all concerned about such stack dives need to examine CompletedSynchronously and have the caller of the Begin method do the continuation work if it’s true , which means the callback then needs to not do the continuation work. This is also why CompletedSynchronously must never change: the caller and the callback need to see the same value to ensure that the continuation work is performed once and only once, regardless of race conditions.

In our previous DoStuff example, that then leads to code like this:
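A reconstruction of that consumption pattern (Use remains a hypothetical helper); note how the continuation work appears twice, guarded by CompletedSynchronously so it runs exactly once:

```csharp
try
{
    IAsyncResult ar = handler.BeginDoStuff(arg, iar =>
    {
        // Asynchronous completion: the callback does the continuation work.
        if (!iar.CompletedSynchronously)
        {
            try
            {
                int i = handler.EndDoStuff(iar);
                Use(i);
            }
            catch (Exception e2)
            {
                // ... handle failure
            }
        }
    }, null);

    // Synchronous completion: the caller does the continuation work,
    // staying at the same stack depth.
    if (ar.CompletedSynchronously)
    {
        int i = handler.EndDoStuff(ar);
        Use(i);
    }
}
catch (Exception e)
{
    // ... handle failure
}
```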

That’s a mouthful. And so far we’ve only looked at consuming the pattern… we haven’t looked at implementing the pattern. While most developers wouldn’t need to be concerned about leaf operations (e.g. implementing the actual Socket.BeginReceive / EndReceive methods that interact with the operating system), many, many developers would need to be concerned with composing these operations (performing multiple asynchronous operations that together form a larger one), which means not only consuming other Begin/End methods but also implementing them yourself so that your composition itself can be consumed elsewhere. And, you’ll notice there was no control flow in my previous DoStuff example. Introduce multiple operations into this, especially with even simple control flow like a loop, and all of a sudden this becomes the domain of experts that enjoy pain, or blog post authors trying to make a point.

So just to drive that point home, let’s implement a complete example. At the beginning of this post, I showed a CopyStreamToStream method that copies all of the data from one stream to another (à la Stream.CopyTo , but, for the sake of explanation, assuming that doesn’t exist):

Straightforward: we repeatedly read from one stream and then write the resulting data to the other, read from one stream and write to the other, and so on, until we have no more data to read. Now, how would we implement this asynchronously using the APM pattern? Something like this:
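The APM listing here was lost. The following is a condensed, hedged reconstruction: MyAsyncResult is a minimal hand-rolled IAsyncResult, and the original post's fuller version differs in its details, but the shape — a read/write loop driven partly by the caller and partly by callbacks, with CompletedSynchronously checks to avoid stack dives — is the same.

```csharp
using System;
using System.IO;
using System.Threading;

// Minimal hand-rolled IAsyncResult (simplified: CompletedSynchronously
// is always false for the composite operation).
sealed class MyAsyncResult : IAsyncResult
{
    private readonly object _lock = new();
    private ManualResetEvent? _event;
    private readonly AsyncCallback? _callback;
    private Exception? _error;

    public MyAsyncResult(AsyncCallback? callback, object? state)
    {
        _callback = callback;
        AsyncState = state;
    }

    public object? AsyncState { get; }
    public bool CompletedSynchronously => false;
    public bool IsCompleted { get; private set; }
    public WaitHandle AsyncWaitHandle
    {
        get { lock (_lock) { return _event ??= new ManualResetEvent(IsCompleted); } }
    }

    public void Complete(Exception? error)
    {
        lock (_lock)
        {
            _error = error;
            IsCompleted = true;
            _event?.Set();
        }
        _callback?.Invoke(this);
    }

    public void Wait()
    {
        if (!IsCompleted) AsyncWaitHandle.WaitOne();
        if (_error is not null) throw _error; // stored raw, as critiqued above
    }
}

static class StreamCopier
{
    public static IAsyncResult BeginCopyStreamToStream(
        Stream source, Stream destination, AsyncCallback? callback, object? state)
    {
        var ar = new MyAsyncResult(callback, state);
        var buffer = new byte[0x1000];

        Action<IAsyncResult?> loop = null!;
        loop = readResult =>
        {
            try
            {
                while (true)
                {
                    if (readResult is null)
                    {
                        readResult = source.BeginRead(buffer, 0, buffer.Length,
                            iar => { if (!iar.CompletedSynchronously) loop(iar); }, null);
                        if (!readResult.CompletedSynchronously) return; // callback continues
                    }

                    int numRead = source.EndRead(readResult);
                    readResult = null;
                    if (numRead == 0) { ar.Complete(null); return; }

                    IAsyncResult writeResult = destination.BeginWrite(buffer, 0, numRead,
                        iar =>
                        {
                            if (!iar.CompletedSynchronously)
                            {
                                try { destination.EndWrite(iar); loop(null); }
                                catch (Exception e) { ar.Complete(e); }
                            }
                        }, null);
                    if (!writeResult.CompletedSynchronously) return; // callback continues
                    destination.EndWrite(writeResult);
                }
            }
            catch (Exception e) { ar.Complete(e); }
        };

        loop(null);
        return ar;
    }

    public static void EndCopyStreamToStream(IAsyncResult asyncResult) =>
        ((MyAsyncResult)asyncResult).Wait();
}
```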

Yowsers. And, even with all of that gobbledygook, it’s still not a great implementation. For example, the IAsyncResult implementation is locking on every operation rather than doing things in a more lock-free manner where possible, the Exception is being stored raw rather than as an ExceptionDispatchInfo that would enable augmenting its call stack when propagated, there’s a lot of allocation involved in each individual operation (e.g. a delegate being allocated for each BeginWrite call), and so on. Now, imagine having to do all of this for each method you wanted to write. Every time you wanted to write a reusable method that would consume another asynchronous operation, you’d need to do all of this work. And if you wanted to write reusable combinators that could operate over multiple discrete IAsyncResult s efficiently (think Task.WhenAll ), that’s another level of difficulty; every operation implementing and exposing its own APIs specific to that operation meant there was no lingua franca for talking about them all similarly (though some developers wrote libraries that tried to ease the burden a bit, typically via another layer of callbacks that enabled the API to supply an appropriate AsyncCallback to a Begin method).

And all of that complication meant that very few folks even attempted this, and for those who did, well, bugs were rampant. To be fair, this isn’t really a criticism of the APM pattern. Rather, it’s a critique of callback-based asynchrony in general. We’re all so used to the power and simplicity that control flow constructs in modern languages provide us with, and callback-based approaches typically run afoul of such constructs once any reasonable amount of complexity is introduced. No other mainstream language had a better alternative available, either.

We needed a better way, one in which we learned from the APM pattern, incorporating the things it got right while avoiding its pitfalls. An interesting thing to note is that the APM pattern is just that, a pattern; the runtime, core libraries, and compiler didn’t provide any assistance in consuming or implementing the pattern.

Event-Based Asynchronous Pattern

.NET Framework 2.0 saw a few APIs introduced that implemented a different pattern for handling asynchronous operations, one primarily intended for doing so in the context of client applications. This Event-based Asynchronous Pattern, or EAP, also came as a pair of members (at least, possibly more), this time a method to initiate the asynchronous operation and an event to listen for its completion. Thus, our earlier DoStuff example might have been exposed as a set of members like this:
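A reconstruction of that EAP surface area (the delegate and event-args shapes follow the standard AsyncCompletedEventArgs convention; exact member names are illustrative):

```csharp
class Handler
{
    public void DoStuffAsync(string arg, object? userToken);
    public event DoStuffEventHandler? DoStuffCompleted;
}

public delegate void DoStuffEventHandler(object sender, DoStuffEventArgs e);

public class DoStuffEventArgs : AsyncCompletedEventArgs
{
    public DoStuffEventArgs(int result, Exception? error, bool canceled, object? userToken)
        : base(error, canceled, userToken) => Result = result;

    public int Result { get; }
}
```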

You’d register your continuation work with the DoStuffCompleted event and then invoke the DoStuffAsync method; it would initiate the operation, and upon that operation’s completion, the DoStuffCompleted event would be raised asynchronously from the caller. The handler could then run its continuation work, likely validating that the userToken supplied matched the one it was expecting, enabling multiple handlers to be hooked up to the event at the same time.

This pattern made a few use cases a bit easier while making other uses cases significantly harder (and given the previous APM CopyStreamToStream example, that’s saying something). It didn’t get rolled out in a widespread manner, and it came and went effectively in a single release of .NET Framework, albeit leaving behind the APIs added during its tenure, like Ping.SendAsync / Ping.PingCompleted :
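The relevant members on System.Net.NetworkInformation.Ping are along these lines:

```csharp
public class Ping : Component
{
    public void SendAsync(string hostNameOrAddress, object? userToken);
    public event PingCompletedEventHandler? PingCompleted;
}
```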

However, it did add one notable advance that the APM pattern didn’t factor in at all, and that has endured into the models we embrace today: SynchronizationContext .

SynchronizationContext was also introduced in .NET Framework 2.0, as an abstraction for a general scheduler. In particular, SynchronizationContext's most used method is Post , which queues a work item to whatever scheduler is represented by that context. The base implementation of SynchronizationContext , for example, just represents the ThreadPool , and so the base implementation of SynchronizationContext.Post simply delegates to ThreadPool.QueueUserWorkItem , which is used to ask the ThreadPool to invoke the supplied callback with the associated state on one of the pool's threads. However, SynchronizationContext's bread-and-butter isn't just about supporting arbitrary schedulers; rather, it's about supporting scheduling in a manner that works according to the needs of various application models.

Consider a UI framework like Windows Forms. As with most UI frameworks on Windows, controls are associated with a particular thread, and that thread runs a message pump which runs work that’s able to interact with those controls: only that thread should try to manipulate those controls, and any other thread that wants to interact with the controls should do so by sending a message to be consumed by the UI thread’s pump. Windows Forms makes this easy with methods like Control.BeginInvoke , which queues the supplied delegate and arguments to be run by whatever thread is associated with that Control . You can thus write code like this:
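A reconstruction of that Windows Forms snippet (ComputeMessage and button1 are assumed from the surrounding text):

```csharp
ThreadPool.QueueUserWorkItem(_ =>
{
    string message = ComputeMessage();
    button1.BeginInvoke(() =>
    {
        button1.Text = message;
    });
});
```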

That will offload the ComputeMessage() work to be done on a ThreadPool thread (so as to keep the UI responsive while it’s being processed), and then when that work has completed, queue a delegate back to the thread associated with button1 to update button1 ‘s label. Easy enough. WPF has something similar, just with its Dispatcher type:
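The WPF equivalent, using the Dispatcher associated with the control (same assumed helpers):

```csharp
ThreadPool.QueueUserWorkItem(_ =>
{
    string message = ComputeMessage();
    button1.Dispatcher.InvokeAsync(() =>
    {
        button1.Content = message;
    });
});
```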

And .NET MAUI has something similar. But what if I wanted to put this logic into a helper method? e.g.
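Something with a framework-agnostic signature like:

```csharp
public static void ComputeMessageAndInvokeUpdate(Action<string> update);
```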

I could then use that like this:
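For example, from a Windows Forms app:

```csharp
ComputeMessageAndInvokeUpdate(message => button1.Text = message);
```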

but how could ComputeMessageAndInvokeUpdate be implemented in such a way that it could work in any of those applications? Would it need to be hardcoded to know about every possible UI framework? That’s where SynchronizationContext shines. We might implement the method like this:
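A reconstruction of that implementation: capture SynchronizationContext.Current before offloading, then Post the update back through it (falling back to invoking directly if no context is present):

```csharp
public static void ComputeMessageAndInvokeUpdate(Action<string> update)
{
    // Capture whatever "scheduler" the current application model publishes.
    SynchronizationContext? sc = SynchronizationContext.Current;
    ThreadPool.QueueUserWorkItem(_ =>
    {
        string message = ComputeMessage();
        if (sc is not null)
        {
            // Queue the update back to the captured context (e.g. the UI thread).
            sc.Post(state => update((string)state!), message);
        }
        else
        {
            update(message);
        }
    });
}
```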

That uses the SynchronizationContext as an abstraction to target whatever “scheduler” should be used to get back to the necessary environment for interacting with the UI. Each application model then ensures it’s published as SynchronizationContext.Current a SynchronizationContext -derived type that does the “right thing.” For example, Windows Forms has this :

and WPF has this :

ASP.NET used to have one , which didn’t actually care about what thread work ran on, but rather that work associated with a given request was serialized such that multiple threads wouldn’t concurrently be accessing a given HttpContext :

This also isn’t limited to such main application models. For example, xunit is a popular unit testing framework, one that .NET’s core repos use for their unit testing, and it also employs multiple custom SynchronizationContext s. You can, for example, allow tests to run in parallel but limit the number of tests that are allowed to be running concurrently. How is that enabled? Via a SynchronizationContext :

MaxConcurrencySyncContext ‘s Post method just queues the work to its own internal work queue, which it then processes on its own worker threads, where it controls how many there are based on the max concurrency desired. You get the idea.

How does this tie in with the Event-based Asynchronous Pattern? Both EAP and SynchronizationContext were introduced at the same time, and the EAP dictated that the completion events should be queued to whatever SynchronizationContext was current when the asynchronous operation was initiated. To simplify that ever so slightly (and arguably not enough to warrant the extra complexity), some helper types were also introduced in System.ComponentModel , in particular AsyncOperation and AsyncOperationManager . The former was just a tuple that wrapped the user-supplied state object and the captured SynchronizationContext , and the latter just served as a simple factory to do that capture and create the AsyncOperation instance. Then EAP implementations would use those, e.g. Ping.SendAsync called AsyncOperationManager.CreateOperation to capture the SynchronizationContext , and then when the operation completed, the AsyncOperation ‘s PostOperationCompleted method would be invoked to call the stored SynchronizationContext ‘s Post method.

SynchronizationContext provides a few more trinkets worthy of mention, as they'll show up again in a bit. In particular, it exposes OperationStarted and OperationCompleted methods. The base implementations of these virtuals are empty, doing nothing, but a derived implementation might override them to track in-flight operations. That means EAP implementations would also invoke these OperationStarted / OperationCompleted methods at the beginning and end of each operation, in order to inform any present SynchronizationContext and allow it to track the work. This is particularly relevant to the EAP pattern because the methods that initiate the async operations are void-returning: you get nothing back that allows you to track the work individually. We'll get back to that.

So, we needed something better than the APM pattern, and the EAP that came next introduced some new things but didn’t really address the core problems we faced. We still needed something better.

Enter Tasks

.NET Framework 4.0 introduced the System.Threading.Tasks.Task type. At its heart, a Task is just a data structure that represents the eventual completion of some asynchronous operation (other frameworks call a similar type a “promise” or a “future”). A Task is created to represent some operation, and then when the operation it logically represents completes, the results are stored into that Task . Simple enough. But the key feature that Task provides that makes it leaps and bounds more useful than IAsyncResult is that it builds into itself the notion of a continuation. That one feature means you can walk up to any Task and ask to be notified asynchronously when it completes, with the task itself handling the synchronization to ensure the continuation is invoked regardless of whether the task has already completed, hasn’t yet completed, or is completing concurrently with the notification request. Why is that so impactful? Well, if you remember back to our discussion of the old APM pattern, there were two primary problems.

  • You had to implement a custom IAsyncResult implementation for every operation: there was no built-in IAsyncResult implementation anyone could just use for their needs.
  • You had to know prior to the Begin method being called what you wanted to do when it was complete. This makes it a significant challenge to implement combinators and other generalized routines for consuming and composing arbitrary async implementations.

In contrast, with Task , that shared representation lets you walk up to an async operation after you've already initiated it and provide a continuation at that point… you don't need to provide that continuation to the method that initiates the operation. Everyone who has asynchronous operations can produce a Task , and everyone who consumes asynchronous operations can consume a Task , and nothing custom needs to be done to connect the two: Task becomes the lingua franca for enabling producers and consumers of asynchronous operations to talk. And that has changed the face of .NET. More on that in a bit…

For now, let’s better understand what this actually means. Rather than dive into the intricate code for Task , we’ll do the pedagogical thing and just implement a simple version. This isn’t meant to be a great implementation, rather only complete enough functionally to help understand the meat of what is a Task , which, at the end of the day, is really just a data structure that handles coordinating the setting and reception of a completion signal. We’ll start with just a few fields:
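A reconstruction of those fields, matching the description that follows (the _ec field holds the captured ExecutionContext mentioned later):

```csharp
class MyTask
{
    private bool _completed;
    private Exception? _error;
    private Action<MyTask>? _continuation;
    private ExecutionContext? _ec;
    ...
}
```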

We need a field to know whether the task has completed ( _completed ), and we need a field to store any error that caused the task to fail ( _error ); if we were also implementing a generic MyTask<TResult> , there’d also be a private TResult _result field for storing the successful result of the operation. Thus far, this looks a lot like our custom IAsyncResult implementation earlier (not a coincidence, of course). But now the pièce de résistance, the _continuation field. In this simple implementation, we’re supporting just a single continuation, but that’s enough for explanatory purposes (the real Task employs an object field that can either be an individual continuation object or a List<> of continuation objects). This is a delegate that will be invoked when the task completes.

Now, a bit of surface area. As noted, one of the fundamental advances in Task over previous models was the ability to supply the continuation work (the callback) after the operation was initiated. We need a method to let us do that, so let’s add ContinueWith :
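A reconstruction of ContinueWith consistent with the description below: queue immediately if already completed, otherwise store the single supported continuation along with the captured ExecutionContext .

```csharp
public void ContinueWith(Action<MyTask> action)
{
    lock (this)
    {
        if (_completed)
        {
            // Already done: just queue the continuation's execution.
            ThreadPool.QueueUserWorkItem(_ => action(this));
        }
        else if (_continuation is not null)
        {
            throw new InvalidOperationException(
                "Unlike Task, this implementation only supports a single continuation.");
        }
        else
        {
            _continuation = action;
            _ec = ExecutionContext.Capture();
        }
    }
}
```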

If the task has already been marked completed by the time ContinueWith is called, ContinueWith just queues the execution of the delegate. Otherwise, the method stores the delegate, such that the continuation may be queued when the task completes (it also stores something called an ExecutionContext , and then uses that when the delegate is later invoked, but don’t worry about that part for now… we’ll get to it). Simple enough.

Then we need to be able to mark the MyTask as completed, meaning whatever asynchronous operation it represents has finished. For that, we’ll expose two methods, one to mark it completed successfully (“SetResult”), and one to mark it completed with an error (“SetException”):
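A reconstruction of the completion methods, both funneling into a shared Complete helper:

```csharp
public void SetResult() => Complete(null);

public void SetException(Exception error) => Complete(error);

private void Complete(Exception? error)
{
    lock (this)
    {
        if (_completed) throw new InvalidOperationException("Already completed");

        _error = error;
        _completed = true;

        if (_continuation is not null)
        {
            // Queue the stored continuation, flowing the captured
            // ExecutionContext if one was captured.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                if (_ec is not null)
                {
                    ExecutionContext.Run(_ec, _ => _continuation(this), null);
                }
                else
                {
                    _continuation(this);
                }
            });
        }
    }
}
```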

We store any error, we mark the task as having been completed, and then if a continuation had previously been registered, we queue it to be invoked.

Finally, we need a way to propagate any exception that may have occurred in the task (and, if this were a generic MyTask<T> , to return its _result ); to facilitate certain scenarios, we also allow this method to block waiting for the task to complete, which we can implement in terms of ContinueWith (the continuation just signals a ManualResetEventSlim that the caller then blocks on waiting for completion).
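A reconstruction of that blocking Wait , built on ContinueWith as described (ExceptionDispatchInfo is from System.Runtime.ExceptionServices):

```csharp
public void Wait()
{
    ManualResetEventSlim? mres = null;
    lock (this)
    {
        if (!_completed)
        {
            mres = new ManualResetEventSlim();
            ContinueWith(_ => mres.Set()); // signal when the task completes
        }
    }
    mres?.Wait();

    if (_error is not null)
    {
        // Rethrow while preserving the original stack trace.
        ExceptionDispatchInfo.Throw(_error);
    }
}
```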

And that’s basically it. Now to be sure, the real Task is way more complicated, with a much more efficient implementation, with support for any number of continuations, with a multitude of knobs about how it should behave (e.g. should continuations be queued as is being done here or should they be invoked synchronously as part of the task’s completion), with the ability to store multiple exceptions rather than just one, with special knowledge of cancellation, with tons of helper methods for doing common operations (e.g. Task.Run which creates a Task to represent a delegate queued to be invoked on the thread pool), and so on. But there’s no magic to any of that; at its core, it’s just what we saw here.

You might also notice that my simple MyTask has public SetResult / SetException methods directly on it, whereas Task doesn’t. Actually, Task does have such methods, they’re just internal , with a System.Threading.Tasks.TaskCompletionSource type serving as a separate “producer” for the task and its completion; that was done not out of technical necessity but as a way to keep the completion methods off of the thing meant only for consumption. You can then hand out a Task without having to worry about it being completed out from under you; the completion signal is an implementation detail of whatever created the task and also reserves the right to complete it by keeping the TaskCompletionSource to itself. ( CancellationToken and CancellationTokenSource follow a similar pattern: CancellationToken is just a struct wrapper for a CancellationTokenSource , serving up only the public surface area related to consuming a cancellation signal but without the ability to produce one, which is a capability restricted to whomever has access to the CancellationTokenSource .)

Of course, we can implement combinators and helpers for this MyTask similar to what Task provides. Want a simple MyTask.WhenAll ? Here you go:
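A two-task reconstruction (for simplicity it stores only a single exception):

```csharp
public static MyTask WhenAll(MyTask t1, MyTask t2)
{
    var t = new MyTask();

    int remaining = 2;
    Exception? e = null;

    Action<MyTask> continuation = completed =>
    {
        e ??= completed._error; // only keep one exception, for simplicity
        if (Interlocked.Decrement(ref remaining) == 0)
        {
            if (e is not null) t.SetException(e);
            else t.SetResult();
        }
    };

    t1.ContinueWith(continuation);
    t2.ContinueWith(continuation);

    return t;
}
```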

Want a MyTask.Run ? You got it:
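Reconstructed along these lines:

```csharp
public static MyTask Run(Action action)
{
    var t = new MyTask();

    ThreadPool.QueueUserWorkItem(_ =>
    {
        try
        {
            action();
            t.SetResult();
        }
        catch (Exception e)
        {
            t.SetException(e);
        }
    });

    return t;
}
```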

How about a MyTask.Delay ? Sure:
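Reconstructed with a System.Threading.Timer that completes the task when it fires:

```csharp
public static MyTask Delay(TimeSpan delay)
{
    var t = new MyTask();

    var timer = new Timer(_ => t.SetResult());
    timer.Change(delay, Timeout.InfiniteTimeSpan); // fire once after the delay

    return t;
}
```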

You get the idea.

With Task in place, all previous async patterns in .NET became a thing of the past. Anywhere an asynchronous implementation previously was implemented with the APM pattern or the EAP pattern, new Task -returning methods were exposed.

And ValueTasks

Task continues to be the workhorse for asynchrony in .NET to this day, with new methods exposed every release and routinely throughout the ecosystem that return Task and Task<TResult> . However, Task is a class, which means creating one does come with an allocation. For the most part, one extra allocation for a long-lived asynchronous operation is a pittance and won’t meaningfully impact performance for all but the most performance-sensitive operations. However, as was previously noted, synchronous completion of asynchronous operations is fairly common. Stream.ReadAsync was introduced to return a Task<int> , but if you’re reading from, say, a BufferedStream , there’s a really good chance many of your reads are going to complete synchronously due to simply needing to pull data from an in-memory buffer rather than performing syscalls and real I/O. Having to allocate an additional object just to return such data is unfortunate (note it was the case with APM as well). For non-generic Task -returning methods, the method can just return a singleton already-completed task, and in fact one such singleton is provided by Task in the form of Task.CompletedTask . But for Task<TResult> , it’s impossible to cache a Task for every possible TResult . What can we do to make such synchronous completion faster?

It is possible to cache some Task<TResult> s. For example, Task<bool> is very common, and there are only two meaningful things to cache there: a Task<bool> when the Result is true and one when the Result is false . Or while we wouldn’t want to try caching four billion Task<int> s to accommodate every possible Int32 result, small Int32 values are very common, so we could cache a few for, say, -1 through 8. Or for arbitrary types, default is a reasonably common value, so we could cache a Task<TResult> where Result is default(TResult) for every relevant type. And in fact, Task.FromResult does that today (as of recent versions of .NET), using a small cache of such reusable Task<TResult> singletons and returning one of them if appropriate or otherwise allocating a new Task<TResult> for the exact provided result value. Other schemes can be created to handle other reasonably common cases. For example, when working with Stream.ReadAsync , it’s reasonably common to call it multiple times on the same stream, all with the same count for the number of bytes allowed to be read. And it’s reasonably common for the implementation to be able to fully satisfy that count request. Which means it’s reasonably common for Stream.ReadAsync to repeatedly return the same int result value. To avoid multiple allocations in such scenarios, multiple Stream types (like MemoryStream ) will cache the last Task<int> they successfully returned, and if the next read ends up also completing synchronously and successfully with the same result, it can just return the same Task<int> again rather than creating a new one. But what about other cases? How can this allocation for synchronous completions be avoided more generally in situations where the performance overhead really matters?
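The Task.FromResult caching is easy to observe (as an implementation detail of recent .NET versions, not a contract):

```csharp
using System;
using System.Threading.Tasks;

// On recent .NET, Task.FromResult returns cached singletons for common values
// such as true/false and small integers. Don't rely on this; it's an optimization.
Task<bool> a = Task.FromResult(true);
Task<bool> b = Task.FromResult(true);
Console.WriteLine(ReferenceEquals(a, b)); // typically True on recent .NET

Task<int> c = Task.FromResult(3);
Task<int> d = Task.FromResult(3);
Console.WriteLine(ReferenceEquals(c, d)); // typically True for values in roughly -1..8
```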

That’s where ValueTask<TResult> comes into the picture ( a much more detailed examination of ValueTask<TResult> is also available). ValueTask<TResult> started life as a discriminated union between a TResult and a Task<TResult> . At the end of the day, ignoring all the bells and whistles, that’s all it is (or, rather, was), either an immediate result or a promise for a result at some point in the future:
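A rough sketch of that original shape (the type and member names here are hypothetical; the real struct carries additional state):

```csharp
using System;
using System.Threading.Tasks;

var immediate = new NaiveValueTask<int>(42);
Console.WriteLine(immediate.AsTask().Result); // 42

// A discriminated union of an immediate TResult and a Task<TResult>.
public readonly struct NaiveValueTask<TResult>
{
    private readonly Task<TResult>? _task;
    private readonly TResult _result;

    public NaiveValueTask(TResult result) { _result = result; _task = null; }
    public NaiveValueTask(Task<TResult> task) { _task = task; _result = default!; }

    // Either we already have a task, or we can manufacture one from the stored result.
    public Task<TResult> AsTask() => _task ?? Task.FromResult(_result);
}
```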

A method could then return such a ValueTask<TResult> instead of a Task<TResult> , and at the expense of a larger return type and a little more indirection, avoid the Task<TResult> allocation if the TResult was known by the time it needed to be returned.

There are, however, super duper extreme high-performance scenarios where you want to be able to avoid the Task<TResult> allocation even in the asynchronous-completion case. For example, Socket lives at the bottom of the networking stack, and SendAsync and ReceiveAsync on sockets are on the super hot path for many a service, with both synchronous and asynchronous completions being very common (most sends complete synchronously, and many receives complete synchronously due to data having already been buffered in the kernel). Wouldn’t it be nice if, on a given Socket , we could make such sending and receiving allocation-free, regardless of whether the operations complete synchronously or asynchronously?

That’s where System.Threading.Tasks.Sources.IValueTaskSource<TResult> enters the picture:
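The interface exposes three members: GetStatus, OnCompleted, and GetResult, each taking a token that lets a pooled source distinguish uses. A toy, always-completed implementation shows how a ValueTask<TResult> can be backed by something other than a Task<TResult>:

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

var vt = new ValueTask<int>(new AlwaysCompleted(), token: 0);
Console.WriteLine(vt.Result); // 42

// Toy IValueTaskSource<int> that is always already completed.
sealed class AlwaysCompleted : IValueTaskSource<int>
{
    public ValueTaskSourceStatus GetStatus(short token) => ValueTaskSourceStatus.Succeeded;

    public void OnCompleted(Action<object?> continuation, object? state, short token,
        ValueTaskSourceOnCompletedFlags flags) => continuation(state); // already done: run inline

    public int GetResult(short token) => 42;
}
```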

The IValueTaskSource<TResult> interface allows an implementation to provide its own backing object for a ValueTask<TResult> , enabling the object to implement methods like GetResult to retrieve the result of the operation and OnCompleted to hook up a continuation to the operation. With that, ValueTask<TResult> evolved a small change to its definition , with its Task<TResult>? _task field replaced by an object? _obj field:
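A sketch of that evolved layout (names hypothetical; the real struct also carries a token and flags):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

var vt = new SketchValueTask<int>(7);
Console.WriteLine(vt.Result); // 7

// _obj is null (immediate result), a Task<TResult>, or an IValueTaskSource<TResult>.
public readonly struct SketchValueTask<TResult>
{
    private readonly object? _obj;
    private readonly TResult _result;

    public SketchValueTask(TResult result) { _result = result; _obj = null; }
    public SketchValueTask(Task<TResult> task) { _obj = task; _result = default!; }
    public SketchValueTask(IValueTaskSource<TResult> source) { _obj = source; _result = default!; }

    public TResult Result => _obj switch
    {
        null => _result,
        Task<TResult> t => t.GetAwaiter().GetResult(),
        IValueTaskSource<TResult> s => s.GetResult(0),
        _ => throw new InvalidOperationException(),
    };
}
```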

Whereas the _task field was either a Task<TResult> or null, the _obj field now can also be an IValueTaskSource<TResult> . Once a Task<TResult> is marked as completed, that’s it, it will remain completed and never transition back to an incomplete state. In contrast, an object implementing IValueTaskSource<TResult> has full control over the implementation, and is free to transition bidirectionally between complete and incomplete states, as ValueTask<TResult> ‘s contract is that a given instance may be consumed only once, thus by construction it shouldn’t observe a post-consumption change in the underlying instance (this is why analysis rules like CA2012 exist). This then enables types like Socket to pool IValueTaskSource<TResult> instances to use for repeated calls. Socket caches up to two such instances, one for reads and one for writes, since the 99.999% case is to have at most one receive and one send in-flight at the same time.

I mentioned ValueTask<TResult> but not ValueTask . When dealing only with avoiding allocation for synchronous completion, there’s little performance benefit to a non-generic ValueTask (representing result-less, void operations), since the same condition can be represented with Task.CompletedTask . But once we care about the ability to use a poolable underlying object for avoiding allocation in the asynchronous-completion case, that then also matters for the non-generic. Thus, when IValueTaskSource<TResult> was introduced, so too were IValueTaskSource and ValueTask .

So, we have Task , Task<TResult> , ValueTask , and ValueTask<TResult> . We’re able to interact with them in various ways, representing arbitrary asynchronous operations and hooking up continuations to handle the completion of those asynchronous operations. And yes, we can do so before or after the operation completes.

But … those continuations are still callbacks!

We’re still forced into a continuation-passing style for encoding our asynchronous control flow!!

It’s still really hard to get right!!!

How can we fix that????

C# Iterators to the Rescue

The glimmer of hope for that solution actually came about a few years before Task hit the scene, with C# 2.0, when it added support for iterators.

“Iterators?” you ask? “You mean for IEnumerable<T> ?” That’s the one. Iterators let you write a single method that is then used by the compiler to implement an IEnumerable<T> and/or an IEnumerator<T> . For example, if I wanted to create an enumerable that yielded the Fibonacci sequence, I might write something like this:
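A sketch of such an iterator (starting the sequence at 0; the quick materialization at the top is just to show the first few values):

```csharp
using System;
using System.Collections.Generic;

// quick check: the first seven values
List<int> first7 = new();
foreach (int i in Fib()) { first7.Add(i); if (first7.Count == 7) break; }
Console.WriteLine(string.Join(" ", first7)); // 0 1 1 2 3 5 8

static IEnumerable<int> Fib()
{
    int prev = 0, next = 1;
    yield return prev;
    yield return next;
    while (true)
    {
        int sum = prev + next;
        yield return sum;
        prev = next;
        next = sum;
    }
}
```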

I can then enumerate this with a foreach :
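For example (Fib re-declared here so the snippet stands alone, limited to the first seven values):

```csharp
using System;
using System.Collections.Generic;

int count = 0;
foreach (int i in Fib())
{
    Console.Write($"{i} ");
    if (++count == 7) break; // stop after the first seven values
}
Console.WriteLine();

static IEnumerable<int> Fib()
{
    int prev = 0, next = 1;
    yield return prev;
    yield return next;
    while (true)
    {
        int sum = prev + next;
        yield return sum;
        prev = next;
        next = sum;
    }
}
```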

I can compose it with other IEnumerable<T> s via combinators like those on System.Linq.Enumerable :
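For example, with Enumerable.Take (Fib again re-declared for self-containment):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

foreach (int i in Fib().Take(7))
{
    Console.Write($"{i} ");
}
Console.WriteLine();

static IEnumerable<int> Fib()
{
    int prev = 0, next = 1;
    yield return prev;
    yield return next;
    while (true)
    {
        int sum = prev + next;
        yield return sum;
        prev = next;
        next = sum;
    }
}
```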

Or I can just manually enumerate it directly via an IEnumerator<T> :
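The manual form drives MoveNext / Current directly (Fib re-declared once more):

```csharp
using System;
using System.Collections.Generic;

using (IEnumerator<int> e = Fib().GetEnumerator())
{
    for (int count = 0; count < 7 && e.MoveNext(); count++)
    {
        Console.Write($"{e.Current} ");
    }
}
Console.WriteLine();

static IEnumerable<int> Fib()
{
    int prev = 0, next = 1;
    yield return prev;
    yield return next;
    while (true)
    {
        int sum = prev + next;
        yield return sum;
        prev = next;
        next = sum;
    }
}
```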

All of the above result in this output:
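With each enumeration limited to the first seven values of the sequence as sketched, that shared output would be:

```
0 1 1 2 3 5 8
```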

The really interesting thing about this is that in order to achieve the above, we need to be able to enter and exit that Fib method multiple times. We call MoveNext , it enters the method, the method then executes until it encounters a yield return , at which point the call to MoveNext needs to return true and a subsequent access to Current needs to return the yielded value. Then we call MoveNext again, and we need to be able to pick up in Fib just after where we last left off, and with all of the state from the previous invocation intact. Iterators are effectively coroutines provided by the C# language / compiler, with the compiler expanding my Fib iterator into a full-blown state machine:
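A reconstruction of the shape of that expansion, abridged. The synthesized names like <Fib>d__0 are valid in IL but not in C#, so this won't compile as-is; Fib() itself becomes roughly `return new <Fib>d__0(-2);`:

```csharp
[CompilerGenerated]
private sealed class <Fib>d__0 : IEnumerable<int>, IEnumerator<int>
{
    private int <>1__state;
    private int <>2__current;
    private int <>l__initialThreadId;
    private int <prev>5__1;
    private int <next>5__2;
    private int <sum>5__3;

    int IEnumerator<int>.Current => <>2__current;
    object IEnumerator.Current => <>2__current;

    public <Fib>d__0(int <>1__state)
    {
        this.<>1__state = <>1__state;
        <>l__initialThreadId = Environment.CurrentManagedThreadId;
    }

    IEnumerator<int> IEnumerable<int>.GetEnumerator()
    {
        if (<>1__state == -2 && <>l__initialThreadId == Environment.CurrentManagedThreadId)
        {
            <>1__state = 0;
            return this;
        }
        return new <Fib>d__0(0);
    }

    IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable<int>)this).GetEnumerator();

    private bool MoveNext()
    {
        switch (<>1__state) // jump table: branch to wherever we last left off
        {
            default:
                return false;
            case 0:
                <>1__state = -1;
                <prev>5__1 = 0;
                <next>5__2 = 1;
                <>2__current = <prev>5__1;
                <>1__state = 1;
                return true;
            case 1:
                <>1__state = -1;
                <>2__current = <next>5__2;
                <>1__state = 2;
                return true;
            case 2:
                <>1__state = -1;
                break;
            case 3:
                <>1__state = -1;
                <prev>5__1 = <next>5__2;
                <next>5__2 = <sum>5__3;
                break;
        }
        <sum>5__3 = <prev>5__1 + <next>5__2;
        <>2__current = <sum>5__3;
        <>1__state = 3;
        return true;
    }

    bool IEnumerator.MoveNext() => MoveNext();
    void IEnumerator.Reset() => throw new NotSupportedException();
    void IDisposable.Dispose() { }
}
```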

All of the logic for Fib is now inside of the MoveNext method, but as part of a jump table that lets the implementation branch to where it last left off, which is tracked in a generated state field on the enumerator type. And the variables I wrote as locals, like prev , next , and sum , have been “lifted” to be fields on the enumerator, so that they may persist across invocations of MoveNext .

(Note that the previous code snippet showing how the C# compiler emits the implementation won’t compile as-is. The C# compiler synthesizes “unspeakable” names, meaning it names types and members it creates in a way that’s valid IL but invalid C#, so as not to risk conflicting with any user-named types and members. I’ve kept everything named as the compiler does, but if you want to experiment with compiling it, you can rename things to use valid C# names instead.)

In my previous example, the last form of enumeration I showed involved manually using the IEnumerator<T> . At that level, we’re manually invoking MoveNext() , deciding when it was an appropriate time to re-enter the coroutine. But… what if instead of invoking it like that, I could instead have the next invocation of MoveNext actually be part of the continuation work performed when an asynchronous operation completes? What if I could yield return something that represents an asynchronous operation and have the consuming code hook up a continuation to that yielded object where that continuation then does the MoveNext ? With such an approach, I could write a helper method like this:
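A sketch of such a helper (simplified: faults from the yielded tasks themselves must be observed inside the iterator, e.g. by accessing their results):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Demo: iterate through a couple of tasks.
IterateAsync(TwoTasks()).Wait();
Console.WriteLine("done");

static IEnumerable<Task> TwoTasks()
{
    yield return Task.CompletedTask;
    yield return Task.Delay(1);
}

// Drives an enumerable of tasks: each MoveNext runs the iterator until the next
// yielded task, and that task's continuation re-enters this same logic.
static Task IterateAsync(IEnumerable<Task> tasks)
{
    var tcs = new TaskCompletionSource();
    IEnumerator<Task> e = tasks.GetEnumerator();

    void Process()
    {
        try
        {
            if (e.MoveNext())
            {
                e.Current.ContinueWith(_ => Process());
                return;
            }
        }
        catch (Exception exc)
        {
            tcs.SetException(exc);
            return;
        }
        tcs.SetResult();
    }

    Process();
    return tcs.Task;
}
```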

Now this is getting interesting. We’re given an enumerable of tasks that we can iterate through. Each time we MoveNext to the next Task and get one, we then hook up a continuation to that Task ; when that Task completes, it’ll just turn around and call right back to the same logic that does a MoveNext , gets the next Task , and so on. This is building on the idea of Task as a single representation for any asynchronous operation, so the enumerable we’re fed can be a sequence of any asynchronous operations. Where might such a sequence come from? From an iterator, of course. Remember our earlier CopyStreamToStream example and how gloriously horrible the APM-based implementation was? Consider this instead:
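A sketch of that shape (the IterateAsync helper from before is re-included so the snippet stands alone; buffer size and names are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

var source = new MemoryStream(new byte[] { 1, 2, 3, 4, 5 });
var destination = new MemoryStream();
CopyStreamToStreamAsync(source, destination).Wait();
Console.WriteLine(string.Join(",", destination.ToArray())); // 1,2,3,4,5

static Task CopyStreamToStreamAsync(Stream source, Stream destination)
{
    return IterateAsync(Impl(source, destination));

    static IEnumerable<Task> Impl(Stream source, Stream destination)
    {
        var buffer = new byte[0x1000];
        while (true)
        {
            Task<int> read = source.ReadAsync(buffer, 0, buffer.Length);
            yield return read;
            int numRead = read.Result; // completed by the time the iterator resumes
            if (numRead <= 0) break;

            Task write = destination.WriteAsync(buffer, 0, numRead);
            yield return write;
            write.Wait(); // observe any failure
        }
    }
}

static Task IterateAsync(IEnumerable<Task> tasks)
{
    var tcs = new TaskCompletionSource();
    IEnumerator<Task> e = tasks.GetEnumerator();

    void Process()
    {
        try
        {
            if (e.MoveNext())
            {
                e.Current.ContinueWith(_ => Process());
                return;
            }
        }
        catch (Exception exc)
        {
            tcs.SetException(exc);
            return;
        }
        tcs.SetResult();
    }

    Process();
    return tcs.Task;
}
```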

Wow, this is almost legible. We’re calling that IterateAsync helper, and the enumerable we’re feeding it is one produced by an iterator that’s handling all the control flow for the copy. It calls Stream.ReadAsync and then yield return s that Task ; that yielded task is what will be handed off to IterateAsync after it calls MoveNext , and IterateAsync will hook a continuation up to that Task , which when it completes will then just call back into MoveNext and end up back in this iterator just after the yield . At that point, the Impl logic gets the result of the method, calls WriteAsync , and again yields the Task it produced. And so on.

And that, my friends, is the beginning of async / await in C# and .NET. Something around 95% of the logic in support of iterators and async / await in the C# compiler is shared. Different syntax, different types involved, but fundamentally the same transform. Squint at the yield return s, and you can almost see await s in their stead.

In fact, some enterprising developers used iterators in this fashion for asynchronous programming before async / await hit the scene. And a similar transformation was prototyped in the experimental Axum programming language, serving as a key inspiration for C#’s async support. Axum provided an async keyword that could be put onto a method, just like async can now in C#. Task wasn’t yet ubiquitous, so inside of async methods, the Axum compiler heuristically matched synchronous method calls to their APM counterparts, e.g. if it saw you calling stream.Read , it would find and utilize the corresponding stream.BeginRead and stream.EndRead methods, synthesizing the appropriate delegate to pass to the Begin method, while also generating a complete APM implementation for the async method being defined such that it was compositional. It even integrated with SynchronizationContext ! While Axum was eventually shelved, it served as an awesome and motivating prototype for what eventually became async / await in C#.

async / await under the covers

Now that we know how we got here, let’s dive in to how it actually works. For reference, here’s our example synchronous method again:
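A minimal version of that synchronous copy (buffer size and names assumed):

```csharp
using System;
using System.IO;

var source = new MemoryStream(new byte[] { 1, 2, 3 });
var destination = new MemoryStream();
CopyStreamToStream(source, destination);
Console.WriteLine(destination.Length); // 3

static void CopyStreamToStream(Stream source, Stream destination)
{
    var buffer = new byte[0x1000];
    int numRead;
    while ((numRead = source.Read(buffer, 0, buffer.Length)) != 0)
    {
        destination.Write(buffer, 0, numRead);
    }
}
```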

and again here’s what the corresponding method looks like with async / await :
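The async version differs only in signature and the two awaited calls:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

var source = new MemoryStream(new byte[] { 1, 2, 3 });
var destination = new MemoryStream();
CopyStreamToStreamAsync(source, destination).Wait();
Console.WriteLine(destination.Length); // 3

static async Task CopyStreamToStreamAsync(Stream source, Stream destination)
{
    var buffer = new byte[0x1000];
    int numRead;
    while ((numRead = await source.ReadAsync(buffer, 0, buffer.Length)) != 0)
    {
        await destination.WriteAsync(buffer, 0, numRead);
    }
}
```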

A breath of fresh air in comparison to everything we’ve seen thus far. The signature changed from void to async Task , we call ReadAsync and WriteAsync instead of Read and Write , respectively, and both of those operations are prefixed with await . That’s it. The compiler and the core libraries take over the rest, fundamentally changing how the code is actually executed. Let’s dive into how.

Compiler Transform

As we’ve already seen, as with iterators, the compiler rewrites the async method into one based on a state machine. We still have a method with the same signature the developer wrote ( public Task CopyStreamToStreamAsync(Stream source, Stream destination) ), but the body of that method is completely different:
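A reconstruction of typical release-build output for the entry point (the synthesized names like <CopyStreamToStreamAsync>d__0 are valid IL but not valid C#, so this won't compile as-is):

```csharp
[AsyncStateMachine(typeof(<CopyStreamToStreamAsync>d__0))]
public Task CopyStreamToStreamAsync(Stream source, Stream destination)
{
    <CopyStreamToStreamAsync>d__0 stateMachine = default;
    stateMachine.<>t__builder = AsyncTaskMethodBuilder.Create();
    stateMachine.source = source;
    stateMachine.destination = destination;
    stateMachine.<>1__state = -1;
    stateMachine.<>t__builder.Start(ref stateMachine);
    return stateMachine.<>t__builder.Task;
}
```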

Note that the only signature difference from what the dev wrote is the lack of the async keyword itself. async isn’t actually a part of the method signature; like unsafe , when you put it in the method signature, you’re expressing an implementation detail of the method rather than something that’s actually exposed as part of the contract. Using async / await to implement a Task -returning method is an implementation detail.

The compiler has generated a struct named <CopyStreamToStreamAsync>d__0 , and it’s zero-initialized an instance of that struct on the stack. Importantly, if the async method completes synchronously, this state machine will never have left the stack. That means there’s no allocation associated with the state machine unless the method needs to complete asynchronously, meaning it await s something that’s not yet completed by that point. More on that in a bit.

This struct is the state machine for the method, containing not only all of the transformed logic from what the developer wrote, but also fields for tracking the current position in that method as well as all of the “local” state the compiler lifted out of the method that needs to survive between MoveNext invocations. It’s the logical equivalent of the IEnumerable<T> / IEnumerator<T> implementation we saw in the iterator. (Note that the code I’m showing is from a release build; in debug builds the C# compiler will actually generate these state machine types as classes, as doing so can aid in certain debugging exercises).

After initializing the state machine, we see a call to AsyncTaskMethodBuilder.Create() . While we’re currently focused on Task s, the C# language and compiler allow for arbitrary types ( “task-like” types ) to be returned from async methods, e.g. I can write a method public async MyTask CopyStreamToStreamAsync , and it would compile just fine as long as we augment the MyTask we defined earlier in an appropriate way. That appropriateness includes declaring an associated “builder” type and associating it with the type via the AsyncMethodBuilder attribute:
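A toy version of that association, with a deliberately minimal builder (all names here are hypothetical; this sketch eagerly allocates its MyTask so all struct copies share one instance, whereas real builders create the task lazily and box the state machine exactly once; only the simple paths are wired up correctly here):

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// A tiny async method returning MyTask; the compiler finds the builder via the attribute.
MyTask t = CompletesSynchronouslyAsync();
Console.WriteLine(t.IsCompleted); // True

static async MyTask CompletesSynchronouslyAsync()
{
    await Task.CompletedTask;
}

[AsyncMethodBuilder(typeof(MyTaskMethodBuilder))]
public sealed class MyTask
{
    internal readonly TaskCompletionSource _tcs = new();
    public bool IsCompleted => _tcs.Task.IsCompleted;
    public void Wait() => _tcs.Task.Wait();
}

// The "builder" pattern the compiler binds against: Create, Task, SetResult,
// SetException, Start, SetStateMachine, AwaitOnCompleted, AwaitUnsafeOnCompleted.
public struct MyTaskMethodBuilder
{
    private MyTask _task;

    public static MyTaskMethodBuilder Create() => new() { _task = new MyTask() };

    public MyTask Task => _task;
    public void SetResult() => _task._tcs.SetResult();
    public void SetException(Exception e) => _task._tcs.SetException(e);

    public void Start<TStateMachine>(ref TStateMachine sm)
        where TStateMachine : IAsyncStateMachine => sm.MoveNext();

    public void SetStateMachine(IAsyncStateMachine sm) { }

    public void AwaitOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine sm)
        where TAwaiter : INotifyCompletion where TStateMachine : IAsyncStateMachine
    {
        TStateMachine copy = sm; // naive copy; real builders box once and flow ExecutionContext
        awaiter.OnCompleted(() => copy.MoveNext());
    }

    public void AwaitUnsafeOnCompleted<TAwaiter, TStateMachine>(ref TAwaiter awaiter, ref TStateMachine sm)
        where TAwaiter : ICriticalNotifyCompletion where TStateMachine : IAsyncStateMachine
    {
        TStateMachine copy = sm;
        awaiter.UnsafeOnCompleted(() => copy.MoveNext());
    }
}
```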

In this context, such a “builder” is something that knows how to create an instance of that type (the Task property), complete it either successfully and with a result if appropriate ( SetResult ) or with an exception ( SetException ), and handle hooking up continuations to await ed things that haven’t yet completed ( AwaitOnCompleted / AwaitUnsafeOnCompleted ). In the case of System.Threading.Tasks.Task , it is by default associated with the AsyncTaskMethodBuilder . Normally that association is provided via an [AsyncMethodBuilder(...)] attribute applied to the type, but Task is known specially to C# and so isn’t actually adorned with that attribute. As such, the compiler has reached for the builder to use for this async method, and is constructing an instance of it using the Create method that’s part of the pattern. Note that as with the state machine, AsyncTaskMethodBuilder is also a struct, so there’s no allocation here, either.

The state machine is then populated with the arguments to this entry point method. Those parameters need to be available to the body of the method that’s been moved into MoveNext , and as such these arguments need to be stored in the state machine so that they can be referenced by the code on the subsequent call to MoveNext . The state machine is also initialized to be in the initial -1 state. If MoveNext is called and the state is -1 , we’ll end up starting logically at the beginning of the method.

Now the most unassuming but most consequential line: a call to the builder’s Start method. This is another part of the pattern that must be exposed on a type used in the return position of an async method, and it’s used to perform the initial MoveNext on the state machine. The builder’s Start method is effectively just this:
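In essence (wrapped in a hypothetical builder struct so the fragment stands alone):

```csharp
using System;
using System.Runtime.CompilerServices;

var sm = new CountingStateMachine();
default(SketchBuilder).Start(ref sm);
Console.WriteLine(sm.Moves); // 1

// The essence of Start: just kick off the first MoveNext.
public struct SketchBuilder
{
    public void Start<TStateMachine>(ref TStateMachine stateMachine)
        where TStateMachine : IAsyncStateMachine
    {
        stateMachine.MoveNext();
    }
}

public struct CountingStateMachine : IAsyncStateMachine
{
    public int Moves;
    public void MoveNext() => Moves++;
    public void SetStateMachine(IAsyncStateMachine stateMachine) { }
}
```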

such that calling stateMachine.<>t__builder.Start(ref stateMachine); is really just calling stateMachine.MoveNext() . In which case, why doesn’t the compiler just emit that directly? Why have Start at all? The answer is that there’s a tad bit more to Start than I let on. But for that, we need to take a brief detour into understanding ExecutionContext .

ExecutionContext

We’re all familiar with passing around state from method to method. You call a method, and if that method specifies parameters, you call the method with arguments in order to feed that data into the callee. This is explicitly passing around data. But there are other more implicit means. For example, rather than passing data as arguments, a method could be parameterless but could dictate that some specific static fields may be populated prior to making the method call, and the method will pull state from there. Nothing about the method’s signature indicates it takes arguments, because it doesn’t: there’s just an implicit contract between the caller and callee that the caller might populate some memory locations and the callee might read those memory locations. The callee and the caller may not even realize it’s happening if they’re intermediaries, e.g. method A might populate the statics and then call B which calls C which calls D which eventually calls E that reads the values of those statics. This is often referred to as “ambient” data: it’s not passed to you via parameters but rather is just sort of hanging out there and available for you to consume if desired.

We can take this a step further, and use thread-local state. Thread-local state, which in .NET is achieved via static fields attributed as [ThreadStatic] or via the ThreadLocal<T> type, can be used in the same way, but with the data limited to just the current thread of execution, with every thread able to have its own isolated copy of those fields. With that, you could populate the thread static, make the method call, and then upon the method’s completion revert the changes to the thread static, enabling a fully isolated form of such implicitly passed data.

But, what about asynchrony? If we make an asynchronous method call and logic inside that asynchronous method wants to access that ambient data, how would it do so? If the data were stored in regular statics, the asynchronous method would be able to access it, but you could only ever have one such method in flight at a time, as multiple callers could end up overwriting each others’ state when they write to those shared static fields. If the data were stored in thread statics, the asynchronous method would be able to access it, but only up until the point where it stopped running synchronously on the calling thread; if it hooked up a continuation to some operation it initiated and that continuation ended up running on some other thread, it would no longer have access to the thread static information. Even if it did happen to run on the same thread, either by chance or because the scheduler forced it to, by the time it did it’s likely the data would have been removed and/or overwritten by some other operation initiated by that thread. For asynchrony, what we need is a mechanism that would allow arbitrary ambient data to flow across these asynchronous points, such that throughout an async method’s logic, wherever and whenever that logic might run, it would have access to that same data.

Enter ExecutionContext . The ExecutionContext type is the vehicle by which ambient data flows from async operation to async operation. It lives in a [ThreadStatic] , but then when some asynchronous operation is initiated, it’s “captured” (a fancy way of saying “read a copy from that thread static”), stored, and then when the continuation of that asynchronous operation is run, the ExecutionContext is first restored to live in the [ThreadStatic] on the thread which is about to run the operation. ExecutionContext is the mechanism by which AsyncLocal<T> is implemented (in fact, in .NET Core, ExecutionContext is entirely about AsyncLocal<T> , nothing more), such that if you store a value into an AsyncLocal<T> , and then for example queue a work item to run on the ThreadPool , that value will be visible in that AsyncLocal<T> inside of that work item running on the pool:
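For example (using a ManualResetEventSlim and a captured variable rather than just printing, so the result can be observed deterministically):

```csharp
using System;
using System.Threading;

var number = new AsyncLocal<int>();
int observed = -1;
using var done = new ManualResetEventSlim();

number.Value = 42;
ThreadPool.QueueUserWorkItem(_ =>
{
    observed = number.Value; // sees 42: the ExecutionContext was captured at queue time
    done.Set();
});
number.Value = 0; // resetting here doesn't affect the already-captured context

done.Wait();
Console.WriteLine(observed); // 42
```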

That will print 42 every time this is run. It doesn’t matter that the moment after we queue the delegate we reset the value of the AsyncLocal<int> back to 0, because the ExecutionContext was captured as part of the QueueUserWorkItem call, and that capture included the state of the AsyncLocal<int> at that exact moment. We can see this in more detail by implementing our own simple thread pool:
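A sketch of such a pool (UnsafeStart requires .NET 6 or later; the same AsyncLocal demo as before is included so the flow can be observed end to end):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

var number = new AsyncLocal<int>();
int observed = -1;
using var done = new ManualResetEventSlim();

number.Value = 42;
MyThreadPool.QueueUserWorkItem(() =>
{
    observed = number.Value;
    done.Set();
});
number.Value = 0;

done.Wait();
Console.WriteLine(observed); // 42: MyThreadPool flowed the ExecutionContext

static class MyThreadPool
{
    private static readonly BlockingCollection<(Action, ExecutionContext?)> s_workItems = new();

    public static void QueueUserWorkItem(Action workItem) =>
        s_workItems.Add((workItem, ExecutionContext.Capture()));

    static MyThreadPool()
    {
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(() =>
            {
                while (true)
                {
                    (Action action, ExecutionContext? ec) = s_workItems.Take();
                    if (ec is null)
                    {
                        action();
                    }
                    else
                    {
                        // Restore the captured context around the delegate invocation.
                        ExecutionContext.Run(ec, static s => ((Action)s!)(), action);
                    }
                }
            })
            { IsBackground = true }.UnsafeStart();
        }
    }
}
```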

Here MyThreadPool has a BlockingCollection<(Action, ExecutionContext?)> that represents its work item queue, with each work item being the delegate for the work to be invoked as well as the ExecutionContext associated with that work. The static constructor for the pool spins up a bunch of threads, each of which just sits in an infinite loop taking the next work item and running it. If no ExecutionContext was captured for a given delegate, the delegate is just invoked directly. But if an ExecutionContext was captured, rather than invoking the delegate directly, we call the ExecutionContext.Run method, which will restore the supplied ExecutionContext as the current context prior to running the delegate, and will then reset the context afterwards. This example includes the exact same code with an AsyncLocal<int> previously shown, except this time using MyThreadPool instead of ThreadPool , yet it will still output 42 each time, because the pool is properly flowing ExecutionContext .

As an aside, you’ll note I called UnsafeStart in MyThreadPool ‘s static constructor. Starting a new thread is exactly the kind of asynchronous point that should flow ExecutionContext , and indeed, Thread ‘s Start method uses ExecutionContext.Capture to capture the current context, store it on the Thread , and then use that captured context when eventually invoking the Thread ‘s ThreadStart delegate. I didn’t want to do that in this example, though, as I didn’t want the Thread s to capture whatever ExecutionContext happened to be present when the static constructor ran (doing so could make a demo about ExecutionContext more convoluted), so I used the UnsafeStart method instead. Threading-related methods that begin with Unsafe behave exactly the same as the corresponding method that lacks the Unsafe prefix except that they don’t capture ExecutionContext , e.g. Thread.Start and Thread.UnsafeStart do identical work, but whereas Start captures ExecutionContext , UnsafeStart does not.

Back To Start

We took a detour into discussing ExecutionContext when I was writing about the implementation of AsyncTaskMethodBuilder.Start , which I said was effectively just a call to stateMachine.MoveNext() . That was a simplification: the method actually needs to factor ExecutionContext into things, and is thus more like this:
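A sketch of that fuller shape (the BCL uses internal helpers on Thread; this version approximates them with the public Capture / Restore APIs, and the demo state machine shows the effect):

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

// Demo: MoveNext mutates an AsyncLocal; the change doesn't leak to the caller.
var local = new AsyncLocal<int>();
var sm = new MutatingStateMachine { Local = local };
default(SketchBuilder).Start(ref sm);
Console.WriteLine(local.Value); // 0: the ambient mutation did not leak out

public struct SketchBuilder
{
    public void Start<TStateMachine>(ref TStateMachine stateMachine)
        where TStateMachine : IAsyncStateMachine
    {
        ExecutionContext? previous = ExecutionContext.Capture();
        try
        {
            stateMachine.MoveNext();
        }
        finally
        {
            // Undo any ExecutionContext changes MoveNext made on this thread.
            ExecutionContext? current = ExecutionContext.Capture();
            if (previous is not null && current != previous)
            {
                ExecutionContext.Restore(previous);
            }
        }
    }
}

public struct MutatingStateMachine : IAsyncStateMachine
{
    public AsyncLocal<int> Local;
    public void MoveNext() => Local.Value = 42; // ambient mutation inside the "async method"
    public void SetStateMachine(IAsyncStateMachine stateMachine) { }
}
```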

Rather than just calling stateMachine.MoveNext() as I’d previously suggested we did, we do a dance here of getting the current ExecutionContext , then invoking MoveNext , and then upon its completion resetting the current context back to what it was prior to the MoveNext invocation.

The reason for this is to prevent ambient data leakage from an async method out to its caller. An example method demonstrates why that matters.

“Impersonation” is the act of changing ambient information about the current user to instead be that of someone else; this lets code act on behalf of someone else, using their privileges and access. In .NET, such impersonation flows across asynchronous operations, which means it’s part of ExecutionContext . Now imagine if Start didn’t restore the previous context, and consider this code:
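The Windows impersonation APIs are platform-specific, so this sketch models the same hazard with an AsyncLocal<string> standing in for the ambient "current user"; ElevateAsAdminAndRunAsync and DoSensitiveWorkAsync are hypothetical names. It shows the guard working: nothing leaks to the caller even though the method yields while "impersonating".

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

Task t = Demo.ElevateAsAdminAndRunAsync();
// The method has yielded at its first incomplete await, mid-"impersonation",
// yet the caller's ambient user is unchanged thanks to Start's context restore:
Console.WriteLine(Demo.CurrentUser.Value ?? "(none)"); // (none)
t.Wait();

public static class Demo
{
    public static readonly AsyncLocal<string?> CurrentUser = new();

    public static async Task ElevateAsAdminAndRunAsync()
    {
        string? previous = CurrentUser.Value;
        CurrentUser.Value = "admin";        // "impersonate"
        try
        {
            await DoSensitiveWorkAsync();   // yields to the caller at the first incomplete await
        }
        finally
        {
            CurrentUser.Value = previous;   // "revert"
        }
    }

    private static Task DoSensitiveWorkAsync() => Task.Delay(100); // hypothetical stand-in
}
```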

This code could find that the ExecutionContext modified inside of ElevateAsAdminAndRunAsync remains after ElevateAsAdminAndRunAsync returns to its synchronous caller (which happens the first time the method await s something that’s not yet complete). That’s because after calling Impersonate , we call DoSensitiveWorkAsync and await the task it returns. Assuming that task isn’t complete, it will cause the invocation of ElevateAsAdminAndRunAsync to yield and return to the caller, with the impersonation still in effect on the current thread. That is not something we want. As such, Start erects this guard that ensures any modifications to ExecutionContext don’t flow out of the synchronous method call and only flow along with any subsequent work performed by the method.

So, the entry point method was invoked, the state machine struct was initialized, Start was called, and that invoked MoveNext . What is MoveNext ? It’s the method that contains all of the original logic from the dev’s method, but with a whole bunch of changes. Let’s start just by looking at the scaffolding of the method. Here’s a decompiled version of what the compiler emits for our method, but with everything inside of the generated try block removed:
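A reconstruction of that scaffolding (compiler-synthesized names again, so not compilable as-is):

```csharp
private void MoveNext()
{
    try
    {
        // ... all of the method's transformed logic goes here ...
    }
    catch (Exception exception)
    {
        <>1__state = -2;
        <buffer>5__2 = null;
        <>t__builder.SetException(exception);
        return;
    }

    <>1__state = -2;
    <buffer>5__2 = null;
    <>t__builder.SetResult();
}
```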

Whatever other work is performed by MoveNext , it has the responsibility of completing the Task returned from the async Task method when all of the work is done. If the body of the try block throws an exception that goes unhandled, then the task will be faulted with that exception. And if the async method successfully reaches its end (equivalent to a synchronous method returning), it will complete the returned task successfully. In either of those cases, it’s setting the state of the state machine to indicate completion. (I sometimes hear developers theorize that, when it comes to exceptions, there’s a difference between those thrown before the first await and after… based on the above, it should be clear that is not the case. Any exception that goes unhandled inside of an async method, no matter where it is in the method and no matter whether the method has yielded, will end up in the above catch block, with the caught exception then stored into the Task that’s returned from the async method.)

Also note that this completion is going through the builder, using its SetException and SetResult methods that are part of the pattern for a builder expected by the compiler. If the async method has previously suspended, the builder will have already had to manufacture a Task as part of that suspension handling (we’ll see how and where soon), in which case calling SetException / SetResult will complete that Task . If, however, the async method hasn’t previously suspended, then we haven’t yet created a Task or returned anything to the caller, so the builder has more flexibility in how it produces that Task . If you remember previously in the entry point method, the very last thing it does is return the Task to the caller, which it does by returning the result of accessing the builder’s Task property (so many things called “Task”, I know):
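The essence of that Task property, in a hypothetical sketch builder (only the successful synchronous-completion path is shown):

```csharp
using System;
using System.Threading.Tasks;

Console.WriteLine(default(SketchBuilder).Task.IsCompletedSuccessfully); // True

public struct SketchBuilder
{
    private Task? _task;

    // If the method ever suspended, a task was already manufactured and is returned;
    // otherwise a completed task can be produced on demand (Task.CompletedTask here,
    // or Task.FromResult for the generic builder).
    public Task Task => _task ??= System.Threading.Tasks.Task.CompletedTask;
}
```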

The builder knows if the method ever suspended, in which case it has a Task that was already created and just returns that. If the method never suspended and the builder doesn’t yet have a task, it can manufacture a completed task here. In this case, with a successful completion, it can just use Task.CompletedTask rather than allocating a new task, avoiding any allocation. In the case of a generic Task<TResult> , the builder can just use Task.FromResult<TResult>(TResult result) .

The builder can also do whatever translations it deems are appropriate to the kind of object it’s creating. For example, Task actually has three possible final states: success, failure, and canceled. The AsyncTaskMethodBuilder ‘s SetException method special-cases OperationCanceledException , transitioning the Task into a TaskStatus.Canceled final state if the exception provided is or derives from OperationCanceledException ; otherwise, the task ends as TaskStatus.Faulted . Such a distinction often isn’t apparent in consuming code; since the exception is stored into the Task regardless of whether it’s marked as Canceled or Faulted , code await ‘ing that Task will not be able to observe the difference between the states (the original exception will be propagated in either case)… it only affects code that interacts with the Task directly, such as via ContinueWith , which has overloads that enable a continuation to be invoked only for a subset of completion statuses.

Now that we understand the lifecycle aspects, here’s everything filled in inside the try block in MoveNext :
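A condensed reconstruction (compiler names again, so not compilable as-is; the IL_008b and IL_00f0 labels referenced below are preserved):

```csharp
private void MoveNext()
{
    int num = <>1__state;
    try
    {
        TaskAwaiter<int> awaiter;
        if (num != 0)
        {
            if (num != 1)
            {
                // First entry: set up the lifted "locals" and start the loop.
                <buffer>5__2 = new byte[4096];
                goto IL_008b;
            }

            // Resuming after the WriteAsync await:
            TaskAwaiter awaiter2 = <>u__2;
            <>u__2 = default(TaskAwaiter);
            num = (<>1__state = -1);
            awaiter2.GetResult();
            goto IL_008b;
        }

        // Resuming after the ReadAsync await:
        awaiter = <>u__1;
        <>u__1 = default(TaskAwaiter<int>);
        num = (<>1__state = -1);
        goto IL_00f0;

        IL_008b:
        awaiter = source.ReadAsync(<buffer>5__2, 0, <buffer>5__2.Length).GetAwaiter();
        if (!awaiter.IsCompleted)
        {
            num = (<>1__state = 0);
            <>u__1 = awaiter;
            <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);
            return; // suspend: give the thread back to the caller
        }

        IL_00f0:
        int numRead = awaiter.GetResult();
        if (numRead != 0)
        {
            TaskAwaiter awaiter2 = destination.WriteAsync(<buffer>5__2, 0, numRead).GetAwaiter();
            if (!awaiter2.IsCompleted)
            {
                num = (<>1__state = 1);
                <>u__2 = awaiter2;
                <>t__builder.AwaitUnsafeOnCompleted(ref awaiter2, ref this);
                return;
            }
            awaiter2.GetResult();
            goto IL_008b;
        }
    }
    catch (Exception exception)
    {
        <>1__state = -2;
        <buffer>5__2 = null;
        <>t__builder.SetException(exception);
        return;
    }
    <>1__state = -2;
    <buffer>5__2 = null;
    <>t__builder.SetResult();
}
```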

This kind of complication might feel a tad familiar. Remember how convoluted our manually-implemented BeginCopyStreamToStream based on APM was? This isn’t quite as complicated, but is also way better in that the compiler is doing the work for us, having rewritten the method in a form of continuation passing while ensuring that all necessary state is preserved for those continuations. Even so, we can squint and follow along. Remember that the state was initialized to -1 in the entry point. We then enter MoveNext , find that this state (which is now stored in the num local) is neither 0 nor 1, and thus execute the code that creates the temporary buffer and then branches to label IL_008b, where it makes the call to stream.ReadAsync . Note that at this point we’re still running synchronously from this call to MoveNext , and thus synchronously from Start , and thus synchronously from the entry point, meaning the developer’s code called CopyStreamToStreamAsync and it’s still synchronously executing, having not yet returned back a Task to represent the eventual completion of this method. That might be about to change…

We call Stream.ReadAsync and we get back a Task<int> from it. The read may have completed synchronously, it may have completed asynchronously but so fast that it’s now already completed, or it might not have completed yet. Regardless, we have a Task<int> that represents its eventual completion, and the compiler emits code that inspects that Task<int> to determine how to proceed: if the Task<int> has in fact already completed (doesn’t matter whether it was completed synchronously or just by the time we checked), then the code for this method can just continue running synchronously… no point in spending unnecessary overhead queueing a work item to handle the remainder of the method’s execution when we can instead just keep running here and now. But to handle the case where the Task<int> hasn’t completed, the compiler needs to emit code to hook up a continuation to the Task . It thus needs to emit code that asks the Task “are you done?” Does it talk to the Task directly to ask that?

It would be limiting if the only thing you could await in C# was a System.Threading.Tasks.Task . Similarly, it would be limiting if the C# compiler had to know about every possible type that could be await ed. Instead, C# does what it typically does in cases like this: it employs a pattern of APIs. Code can await anything that exposes that appropriate pattern, the “awaiter” pattern (just as you can foreach anything that provides the proper “enumerable” pattern). For example, we can augment the MyTask type we wrote earlier to implement the awaiter pattern:
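A sketch of that augmentation, bundling a minimal MyTask so the snippet stands alone (simplified: the IsCompleted read is unsynchronized, and no ExecutionContext is flowed):

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

// Because MyTask exposes the awaiter pattern, it can be awaited directly.
var mt = new MyTask();
ThreadPool.QueueUserWorkItem(_ =>
{
    Thread.Sleep(50);
    mt.SetResult();
});
ConsumeAsync(mt).Wait();
Console.WriteLine("awaited a MyTask");

static async Task ConsumeAsync(MyTask t)
{
    await t; // compiles because MyTask exposes GetAwaiter
}

public sealed class MyTask
{
    private bool _completed;
    private Exception? _error;
    private Action? _continuation;

    public void SetResult() => Complete(null);
    public void SetException(Exception error) => Complete(error);

    private void Complete(Exception? error)
    {
        Action? c;
        lock (this)
        {
            _completed = true;
            _error = error;
            c = _continuation;
        }
        c?.Invoke();
    }

    public void ContinueWith(Action action)
    {
        bool completed;
        lock (this)
        {
            completed = _completed;
            if (!completed) _continuation += action;
        }
        if (completed) action();
    }

    // The awaiter pattern: GetAwaiter returning something with IsCompleted,
    // OnCompleted/UnsafeOnCompleted, and GetResult.
    public MyTaskAwaiter GetAwaiter() => new() { Task = this };

    public struct MyTaskAwaiter : ICriticalNotifyCompletion
    {
        internal MyTask Task;

        public bool IsCompleted => Task._completed;
        public void OnCompleted(Action continuation) => Task.ContinueWith(continuation);
        public void UnsafeOnCompleted(Action continuation) => Task.ContinueWith(continuation);

        public void GetResult()
        {
            if (Task._error is not null) throw Task._error;
        }
    }
}
```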

A type can be awaited if it exposes a GetAwaiter() method, which Task does. That method needs to return something that in turn exposes several members, including an IsCompleted property, which is used to check at the moment IsCompleted is called whether the operation has already completed. And you can see that happening: at IL_008b, the Task returned from ReadAsync has GetAwaiter called on it, and then IsCompleted accessed on that struct awaiter instance. If IsCompleted returns true , then we’ll end up falling through to IL_00f0, where the code calls another member of the awaiter: GetResult() . If the operation failed, GetResult() is responsible for throwing an exception in order to propagate it out of the await in the async method; otherwise, GetResult() is responsible for returning the result of the operation, if there is one. In the case here of the ReadAsync , if that result is 0, then we break out of our read/write loop, go to the end of the method where it calls SetResult , and we’re done.

Backing up a moment, though, the really interesting part of all of this is what happens if that IsCompleted check actually returns false . If it returns true , we just keep on processing the loop, akin to in the APM pattern when CompletedSynchronously returned true and the caller of the Begin method, rather than the callback, was responsible for continuing execution. But if IsCompleted returns false, we need to suspend the execution of the async method until the await ‘d operation completes. That means returning out of MoveNext , and as this was part of Start and we’re still in the entry point method, that means returning the Task out to the caller. But before any of that can happen, we need to hook up a continuation to the Task being awaited (noting that to avoid stack dives as in the APM case, if the asynchronous operation completes after IsCompleted returns false but before we get to hook up the continuation, the continuation still needs to be invoked asynchronously from the calling thread, and thus it’ll get queued). Since we can await anything, we can’t just talk to the Task instance directly; instead, we need to go through some pattern-based method to perform this.

Does that mean there’s a method on the awaiter that will hook up the continuation? That would make sense; after all, Task itself supports continuations, has a ContinueWith method, etc… shouldn’t it be the TaskAwaiter returned from GetAwaiter that exposes the method that lets us set up a continuation? It does, in fact. The awaiter pattern requires that the awaiter implement the INotifyCompletion interface, which contains a single method void OnCompleted(Action continuation) . An awaiter can also optionally implement the ICriticalNotifyCompletion interface, which inherits INotifyCompletion and adds a void UnsafeOnCompleted(Action continuation) method. Per our previous discussion of ExecutionContext , you can guess what the difference between these two methods is: both hook up the continuation, but whereas OnCompleted should flow ExecutionContext , UnsafeOnCompleted needn’t. The need for two distinct methods here, INotifyCompletion.OnCompleted and ICriticalNotifyCompletion.UnsafeOnCompleted , is largely historical, having to do with Code Access Security, or CAS. CAS no longer exists in .NET Core, and it’s off by default in .NET Framework, having teeth only if you opt back in to the legacy partial trust feature. When partial trust is used, CAS information flows as part of ExecutionContext , and thus not flowing it is “unsafe”, hence why methods that don’t flow ExecutionContext were prefixed with “Unsafe”. Such methods were also attributed as [SecurityCritical] , and partially trusted code can’t call a [SecurityCritical] method. As a result, two variants of OnCompleted were created, with the compiler preferring to use UnsafeOnCompleted if provided, but with the OnCompleted variant always provided on its own in case an awaiter needed to support partial trust. From an async method perspective, however, the builder always flows ExecutionContext across await points, so an awaiter that also does so is unnecessary and duplicative work.
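For reference, the two interfaces in question, as defined in System.Runtime.CompilerServices, are tiny:

```csharp
namespace System.Runtime.CompilerServices
{
    public interface INotifyCompletion
    {
        // Hooks up a continuation; expected to flow ExecutionContext.
        void OnCompleted(Action continuation);
    }

    public interface ICriticalNotifyCompletion : INotifyCompletion
    {
        // Hooks up a continuation without needing to flow ExecutionContext.
        void UnsafeOnCompleted(Action continuation);
    }
}
```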

Ok, so the awaiter does expose a method to hook up the continuation. The compiler could use it directly, except for a very critical piece of the puzzle: what exactly should the continuation be? And more to the point, with what object should it be associated? Remember that the state machine struct is on the stack, and the MoveNext invocation we’re currently running in is a method call on that instance. We need to preserve the state machine so that upon resumption we have all the correct state, which means the state machine can’t just keep living on the stack; it needs to be copied to somewhere on the heap, since the stack is going to end up being used for other subsequent, unrelated work performed by this thread. And then the continuation needs to invoke the MoveNext method on that copy of the state machine on the heap.

Moreover, ExecutionContext is relevant here as well. The state machine needs to ensure that any ambient data stored in the ExecutionContext is captured at the point of suspension and then applied at the point of resumption, which means the continuation also needs to incorporate that ExecutionContext . So, just creating a delegate that points to MoveNext on the state machine is insufficient. It’s also undesirable overhead. If when we suspend we create a delegate that points to MoveNext on the state machine, each time we do so we’ll be boxing the state machine struct (even when it’s already on the heap as part of some other object) and allocating an additional delegate (the delegate’s this object reference will be to a newly boxed copy of the struct). We thus need to do a complicated dance whereby we ensure we only promote the struct from the stack to the heap the first time the method suspends execution but all other times uses the same heap object as the target of the MoveNext , and in the process ensures we’ve captured the right context, and upon resumption ensures we’re using that captured context to invoke the operation.

That’s a lot more logic than we want the compiler to emit… we instead want it encapsulated in a helper, for several reasons. First, it’s a lot of complicated code to be emitted into each user’s assembly. Second, we want to allow customization of that logic as part of implementing the builder pattern (we’ll see an example of why later when talking about pooling). And third, we want to be able to evolve and improve that logic and have existing previously-compiled binaries just get better. That’s not a hypothetical; the library code for this support was completely overhauled in .NET Core 2.1, such that the operation is much more efficient than it was on .NET Framework. We’ll start by exploring exactly how this worked on .NET Framework, and then look at what happens now in .NET Core.

You can see in the code generated by the C# compiler what happens when we need to suspend:

We’re storing into the state field the state id that indicates the location we should jump to when the method resumes. We’re then persisting the awaiter itself into a field, so that it can be used to call GetResult after resumption. And then just before returning out of the MoveNext call, the very last thing we do is call <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref this) , asking the builder to hook up a continuation to the awaiter for this state machine. (Note that it calls the builder’s AwaitUnsafeOnCompleted rather than the builder’s AwaitOnCompleted because the awaiter implements ICriticalNotifyCompletion ; the state machine handles flowing ExecutionContext so we needn’t require the awaiter to as well… as mentioned earlier, doing so would just be duplicative and unnecessary overhead.)
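The decompiled snippet itself isn't reproduced in this excerpt; expressed as C#-like pseudocode (the `<>`-prefixed names follow the compiler's naming conventions but are illustrative here), the suspension path it describes looks roughly like:

```csharp
// Inside MoveNext(), when the awaited operation hasn't yet completed:
if (!awaiter.IsCompleted)
{
    this.<>1__state = 0;   // record where to jump to when the method resumes
    this.<>u__1 = awaiter; // persist the awaiter so GetResult can be called later
    this.<>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);
    return;                // suspend: unwind out of MoveNext back to the caller
}
```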

The implementation of that AwaitUnsafeOnCompleted method is too complicated to copy here, so I’ll summarize what it does on .NET Framework:

It uses ExecutionContext.Capture() to grab the current context.

It then allocates a MoveNextRunner object to wrap both the captured context as well as the boxed state machine (which we don’t yet have if this is the first time the method suspends, so we just use null as a placeholder).

It then creates an Action delegate to a Run method on that MoveNextRunner ; this is how it’s able to get a delegate that will invoke the state machine’s MoveNext in the context of the captured ExecutionContext .

If this is the first time the method is suspending, we won’t yet have a boxed state machine, so at this point it boxes it, creating a copy on the heap by storing the instance into a local typed as the IAsyncStateMachine interface. That box is then stored into the MoveNextRunner that was allocated.

Now comes a somewhat mind-bending step. If you look back at the definition of the state machine struct, it contains the builder, public AsyncTaskMethodBuilder <>t__builder; , and if you look at the definition of the builder, it contains internal IAsyncStateMachine m_stateMachine; . The builder needs to reference the boxed state machine so that on subsequent suspensions it can see it’s already boxed the state machine and doesn’t need to do so again. But we just boxed the state machine, and that state machine contained a builder whose m_stateMachine field is null. We need to mutate that boxed state machine’s builder’s m_stateMachine to point to its parent box. To achieve that, the IAsyncStateMachine interface that the compiler-generated state machine struct implements includes a void SetStateMachine(IAsyncStateMachine stateMachine); method, and that state machine struct includes an implementation of that interface method:
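The compiler-generated implementation referenced above is essentially a one-liner forwarding to the builder (shown as C#-like pseudocode, since the real field name contains characters like `<>` that aren't legal in C# source):

```csharp
// Explicit interface implementation on the compiler-generated state machine struct:
void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine) =>
    <>t__builder.SetStateMachine(stateMachine); // builder stores the box into m_stateMachine
```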

So the builder boxes the state machine, and then passes that box to the box’s SetStateMachine method, which calls to the builder’s SetStateMachine method, which stores the box into the field. Whew.

Finally, we have an Action that represents the continuation, and that’s passed to the awaiter’s UnsafeOnCompleted method. In the case of a TaskAwaiter , the task will store that Action into the task’s continuation list, such that when the task completes, it’ll invoke the Action , call back through the MoveNextRunner.Run , call back through ExecutionContext.Run , and finally invoke the state machine’s MoveNext method to re-enter the state machine and continue running from where it left off.

That’s what happens on .NET Framework, and you can witness the outcome of this in a profiler, such as by running an allocation profiler to see what’s allocated on each await. Let’s take this silly program, which I’ve written just to highlight the allocation costs involved:
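The program itself isn't shown in this excerpt; based on the counts described below (1,000 calls to SomeMethodAsync, each awaiting Task.Yield() 1,000 times), it was along these lines:

```csharp
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        for (int i = 0; i < 1000; i++)
        {
            await SomeMethodAsync();
        }
    }

    // Suspends and resumes 1000 times; each await Task.Yield() forces a
    // trip through the thread pool rather than completing synchronously.
    public static async Task SomeMethodAsync()
    {
        for (int i = 0; i < 1000; i++)
        {
            await Task.Yield();
        }
    }
}
```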

Allocation associated with asynchronous operations on .NET Framework

  • ExecutionContext . There’s over a million of these being allocated. Why? Because in .NET Framework, ExecutionContext is a mutable data structure. Since we want to flow the data that was present at the time an async operation was forked and we don’t want it to then see mutations performed after that fork, we need to copy the ExecutionContext . Every single forked operation requires such a copy, so with 1000 calls to SomeMethodAsync each of which is suspending/resuming 1000 times, we have a million ExecutionContext instances. Ouch.
  • Action . Similarly, every time we await something that’s not yet complete (which is the case with our million await Task.Yield() s), we end up allocating a new Action delegate to pass to that awaiter’s UnsafeOnCompleted method.
  • MoveNextRunner . Same deal; there’s a million of these, since in the outline of the steps earlier, every time we suspend, we’re allocating a new MoveNextRunner to store the Action and the ExecutionContext , in order to execute the former with the latter.
  • LogicalCallContext . Another million. These are an implementation detail of AsyncLocal<T> on .NET Framework; AsyncLocal<T> stores its data into the ExecutionContext ‘s “logical call context”, which is a fancy way of saying the general state that’s flowed with the ExecutionContext . So, if we’re making a million copies of the ExecutionContext , we’re making a million copies of the LogicalCallContext , too.
  • QueueUserWorkItemCallback . Each Task.Yield() is queueing a work item to the thread pool, resulting in a million allocations of the work item objects used to represent those million operations.
  • Task<VoidResult> . There’s a thousand of these, so at least we’re out of the “million” club. Every async Task invocation that completes asynchronously needs to allocate a new Task instance to represent the eventual completion of that call.
  • <SomeMethodAsync>d__1 . This is the box of the compiler-generated state machine struct. 1000 methods suspend, 1000 boxes occur.
  • QueueSegment / IThreadPoolWorkItem[] . There are several thousand of these, and they’re not technically related to async methods specifically, but rather to work being queued to the thread pool in general. In .NET Framework, the thread pool’s queue is a linked list of non-circular segments. These segments aren’t reused; for a segment of length N, once N work items have been enqueued into and dequeued from that segment, the segment is discarded and left up for garbage collection.

Allocation associated with asynchronous operations on .NET Core

  • ExecutionContext . In .NET Core, ExecutionContext is now immutable . The downside to that is that every change to the context, e.g. by setting a value into an AsyncLocal<T> , requires allocating a new ExecutionContext . The upside, however, is that flowing context is way, way, way more common than is changing it, and as ExecutionContext is now immutable, we no longer need to clone as part of flowing it. “Capturing” the context is literally just reading it out of a field, rather than reading it and doing a clone of its contents. So it’s not only way, way, way more common to flow than to change, it’s also way, way, way cheaper.
  • LogicalCallContext . This no longer exists in .NET Core. In .NET Core, the only thing ExecutionContext exists for is the storage for AsyncLocal<T> . Other things that had their own special place in ExecutionContext are modeled in terms of AsyncLocal<T> . For example, impersonation in .NET Framework would flow as part of the SecurityContext that’s part of ExecutionContext ; in .NET Core, impersonation flows via an AsyncLocal<SafeAccessTokenHandle> that uses a valueChangedHandler to make appropriate changes to the current thread.
  • QueueSegment / IThreadPoolWorkItem[] . In .NET Core, the ThreadPool ‘s global queue is now implemented as a ConcurrentQueue<T> , and ConcurrentQueue<T> has been rewritten to be a linked list of circular segments of non-fixed size. Once the size of a segment is large enough that the segment never fills because steady-state dequeues are able to keep up with steady-state enqueues, no additional segments need to be allocated, and the same large-enough segment is just used endlessly.

What about the rest of the allocations, like Action , MoveNextRunner , and <SomeMethodAsync>d__1 ? Understanding how the remaining allocations were removed requires diving into how this now works on .NET Core.

Let’s rewind our discussion back to when we were discussing what happens at suspension time:

The code that’s emitted here is the same regardless of which platform surface area is being targeted, so regardless of .NET Framework vs .NET Core, the generated IL for this suspension is identical. What changes, however, is the implementation of that AwaitUnsafeOnCompleted method, which on .NET Core is much different:

Things do start out the same: the method calls ExecutionContext.Capture() to get the current execution context.

Then things diverge from .NET Framework. The builder in .NET Core has just a single field on it:
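That field, paraphrasing the runtime source (the non-generic AsyncTaskMethodBuilder wraps the generic one with an internal VoidTaskResult type argument):

```csharp
// The only field on AsyncTaskMethodBuilder<TResult>:
private Task<TResult>? m_task; // lazily initialized
```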

After capturing the ExecutionContext , it checks whether that m_task field contains an instance of an AsyncStateMachineBox<TStateMachine> , where TStateMachine is the type of the compiler-generated state machine struct. That AsyncStateMachineBox<TStateMachine> type is the “magic.” It’s defined like this:
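Paraphrasing the definition from the runtime source (simplified; some members and interface implementations omitted):

```csharp
// The single object allocated per suspended async method: it IS the Task,
// and the state machine struct lives inside it as a strongly-typed field.
private class AsyncStateMachineBox<TStateMachine> :
    Task<TResult>, IAsyncStateMachineBox
    where TStateMachine : IAsyncStateMachine
{
    public TStateMachine? StateMachine; // no boxing: the struct is stored here directly
    public ExecutionContext? Context;   // captured context, overwritten on each suspension
    private Action? _moveNextAction;    // lazily-created delegate, cached for reuse
}
```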

Rather than having a separate Task , this is the task (note its base type). Rather than boxing the state machine, the struct just lives as a strongly-typed field on this task. And rather than having a separate MoveNextRunner to store both the Action and the ExecutionContext , they’re just fields on this type, and since this is the instance that gets stored into the builder’s m_task field, we have direct access to it and don’t need to re-allocate things on every suspension. If the ExecutionContext changes, we can just overwrite the field with the new context and don’t need to allocate anything else; any Action we have still points to the right place. So, after capturing the ExecutionContext , if we already have an instance of this AsyncStateMachineBox<TStateMachine> , this isn’t the first time the method is suspending, and we can just store the newly captured ExecutionContext into it. If we don’t already have an instance of AsyncStateMachineBox<TStateMachine> , then we need to allocate it:

Note that line which the source comments as “important”. This takes the place of that complicated SetStateMachine dance in .NET Framework, such that SetStateMachine isn’t actually used at all in .NET Core. The taskField you see there is a ref to the AsyncTaskMethodBuilder ‘s m_task field. We allocate the AsyncStateMachineBox<TStateMachine> , then via taskField store that object into the builder’s m_task (this is the builder that’s in the state machine struct on the stack), and then copy that stack-based state machine (which now already contains the reference to the box) into the heap-based AsyncStateMachineBox<TStateMachine> , such that the AsyncStateMachineBox<TStateMachine> appropriately and recursively ends up referencing itself. Still mind bending, but a much more efficient mind bending.

We can then get an Action to a method on this instance that will invoke its MoveNext method that will do the appropriate ExecutionContext restoration prior to calling into the StateMachine ‘s MoveNext . And that Action can be cached into the _moveNextAction field such that any subsequent use can just reuse the same Action . That Action is then passed to the awaiter’s UnsafeOnCompleted to hook up the continuation.

That explanation explains why most of the rest of the allocations are gone: <SomeMethodAsync>d__1 doesn’t get boxed and instead just lives as a field on the task itself, and the MoveNextRunner is no longer needed as it existed only to store the Action and ExecutionContext . But, based on this explanation, we should have still seen 1000 Action allocations, one per method call, and we didn’t. Why? And what about those QueueUserWorkItemCallback objects… we’re still queueing as part of Task.Yield() , so why aren’t those showing up?

As I noted, one of the nice things about pushing off the implementation details into the core library is it can evolve the implementation over time, and we’ve already seen how it evolved from .NET Framework to .NET Core. It’s also evolved further from the initial rewrite for .NET Core, with additional optimizations that benefit from having internal access to key components in the system. In particular, the async infrastructure knows about core types like Task and TaskAwaiter . And because it knows about them and has internals access, it doesn’t have to play by the publicly-defined rules. The awaiter pattern followed by the C# language requires an awaiter to have an OnCompleted or UnsafeOnCompleted method, both of which take the continuation as an Action , and that means the infrastructure needs to be able to create an Action to represent the continuation, in order to work with arbitrary awaiters the infrastructure knows nothing about. But if the infrastructure encounters an awaiter it does know about, it’s under no obligation to take the same code path. For all of the core awaiters defined in System.Private.CoreLib, then, the infrastructure has a leaner path it can follow, one that doesn’t require an Action at all. These awaiters all know about IAsyncStateMachineBox objects, and are able to treat the box object itself as the continuation. So, for example, the YieldAwaitable returned by Task.Yield is able to queue the IAsyncStateMachineBox itself directly into the ThreadPool as a work item, and the TaskAwaiter used when awaiting a Task is able to store the IAsyncStateMachineBox itself directly into the Task ‘s continuation list. No Action needed, no QueueUserWorkItemCallback needed.

Thus, in the very common case where an async method only awaits things from System.Private.CoreLib ( Task , Task<TResult> , ValueTask , ValueTask<TResult> , YieldAwaitable , and the ConfigureAwait variants of those), worst case is there’s only ever a single allocation of overhead associated with the entire lifecycle of the async method: if the method ever suspends, it allocates that single Task -derived type which stores all other required state, and if the method never suspends, there’s no additional allocation incurred.

We can get rid of that last allocation as well, if desired, at least in an amortized fashion. As has been shown, there’s a default builder associated with Task ( AsyncTaskMethodBuilder ), and similarly there’s a default builder associated with Task<TResult> ( AsyncTaskMethodBuilder<TResult> ) and with ValueTask and ValueTask<TResult> ( AsyncValueTaskMethodBuilder and AsyncValueTaskMethodBuilder<TResult> , respectively). For ValueTask / ValueTask<TResult> , the builders are actually fairly simple, as they themselves only handle the synchronously-and-successfully-completing case, in which case the async method completes without ever suspending and the builders can just return ValueTask.CompletedTask or a ValueTask<TResult> wrapping the result value. For everything else, they just delegate to AsyncTaskMethodBuilder / AsyncTaskMethodBuilder<TResult> , since the ValueTask / ValueTask<TResult> that’ll be returned just wraps a Task and it can share all of the same logic. But .NET 6 and C# 10 introduced the ability for a method to override the builder that’s used on a method-by-method basis, and introduced a couple of specialized builders for ValueTask / ValueTask<TResult> that are able to pool IValueTaskSource / IValueTaskSource<TResult> objects representing the eventual completion rather than using Task objects.

We can see the impact of this in our sample. Let’s slightly tweak our SomeMethodAsync we were profiling to return ValueTask instead of Task :

That will result in this generated entry point:

Now, we add [AsyncMethodBuilder(typeof(PoolingAsyncValueTaskMethodBuilder))] to the declaration of SomeMethodAsync :
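That is, the declaration becomes the following (the body is assumed to be the same Task.Yield() loop as before; this requires .NET 6 / C# 10 or later):

```csharp
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

class Example
{
    // Override the default AsyncValueTaskMethodBuilder for this one method,
    // opting in to the pooling implementation.
    [AsyncMethodBuilder(typeof(PoolingAsyncValueTaskMethodBuilder))]
    public static async ValueTask SomeMethodAsync()
    {
        for (int i = 0; i < 1000; i++)
        {
            await Task.Yield();
        }
    }
}
```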

and the compiler instead outputs this:

The actual C# code gen for the entirety of the implementation, including the whole state machine (not shown), is almost identical; the only difference is the type of the builder that’s created and stored and thus used everywhere we previously saw references to the builder. And if you look at the code for PoolingAsyncValueTaskMethodBuilder , you’ll see its structure is almost identical to that of AsyncTaskMethodBuilder , including using some of the exact same shared routines for doing things like special-casing known awaiter types. The key difference is that instead of doing new AsyncStateMachineBox<TStateMachine>() when the method first suspends, it instead does StateMachineBox<TStateMachine>.RentFromCache() , and upon the async method ( SomeMethodAsync ) completing and an await on the returned ValueTask completing, the rented box is returned to the cache. That means (amortized) zero allocation:

Allocation associated with asynchronous operations on .NET Core with pooling

That cache in and of itself is a bit interesting. Object pooling can be a good idea and it can be a bad idea. The more expensive an object is to create, the more valuable it is to pool them; so, for example, it’s a lot more valuable to pool really large arrays than it is to pool really tiny arrays, because larger arrays not only require more CPU cycles and memory accesses to zero out, they put more pressure on the garbage collector to collect more often. For very small objects, though, pooling them can be a net negative. Pools are just memory allocators, as is the GC, so when you pool, you’re trading off the costs associated with one allocator for the costs associated with another, and the GC is very efficient at handling lots of tiny, short-lived objects. If you do a lot of work in an object’s constructor, avoiding that work can dwarf the costs of the allocator itself, making pooling valuable. But if you do little to no work in an object’s constructor, and you pool it, you’re betting that your allocator (your pool) is more efficient for the access patterns employed than is the GC, and that is frequently a bad bet. There are other costs involved as well, and in some cases you can end up effectively fighting against the GC’s heuristics; for example, the GC is optimized based on the premise that references from higher generation (e.g. gen2) objects to lower generation (e.g. gen0) objects are relatively rare, but pooling objects can invalidate those premises.

Now, the objects created by async methods aren’t tiny , and they can be on super hot paths, so pooling can be reasonable. But to make it as valuable as possible we also want to avoid as much overhead as possible. The pool is thus very simple, opting to make renting and returning really fast with little to no contention, even if that means it might end up allocating more than it would if it more aggressively cached more. For each state machine type, the implementation pools up to a single state machine box per thread and a single state machine box per core ; this allows it to rent and return with minimal overhead and minimal contention (no other thread can be accessing the thread-specific cache at the same time, and it’s rare for another thread to be accessing the core-specific cache at the same time). And while this might seem like a relatively small pool, it’s also quite effective at significantly reducing steady state allocation, given that the pool is only responsible for storing objects not currently in use; you could have a million async methods all in flight at any given time, and even though the pool is only able to store up to one object per thread and per core, it can still avoid dropping lots of objects, since it only needs to store an object long enough to transfer it from one operation to another, not while it’s in use by that operation.

SynchronizationContext and ConfigureAwait

We talked about SynchronizationContext previously in the context of the EAP pattern and mentioned that it would show up again. SynchronizationContext makes it possible to call reusable helpers and automatically be scheduled back whenever and to wherever the calling environment deems fit. As a result, it’s natural to expect that to “just work” with async / await , and it does. Going back to our button click handler from earlier:

with async / await we’d like to instead be able to write this as follows:
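That snippet isn't included in this excerpt; it's the one-liner form of the earlier handler (ComputeMessage being the same hypothetical compute helper), inside an async event handler:

```csharp
button1.Text = await Task.Run(() => ComputeMessage());
```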

That invocation of ComputeMessage is offloaded to the thread pool, and upon the method’s completion, execution transitions back to the UI thread associated with the button, and the setting of its Text property happens on that thread.

That integration with SynchronizationContext is left up to the awaiter implementation (the code generated for the state machine knows nothing about SynchronizationContext ), as it’s the awaiter that is responsible for actually invoking or queueing the supplied continuation when the represented asynchronous operation completes. While a custom awaiter need not respect SynchronizationContext.Current , the awaiters for Task , Task<TResult> , ValueTask , and ValueTask<TResult> all do. That means that, by default, when you await a Task , a Task<TResult> , a ValueTask , a ValueTask<TResult> , or even the result of a Task.Yield() call, the awaiter by default will look up the current SynchronizationContext and then if it successfully got a non-default one, will eventually queue the continuation to that context.

We can see this if we look at the code involved in TaskAwaiter . Here’s a snippet of the relevant code from Corelib:
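The snippet isn't reproduced in this excerpt; paraphrased from the runtime source (simplified; names like InternalCurrent are internal details), it's along these lines:

```csharp
if (continueOnCapturedContext)
{
    // Prefer a custom SynchronizationContext if one is present...
    SynchronizationContext? syncCtx = SynchronizationContext.Current;
    if (syncCtx != null && syncCtx.GetType() != typeof(SynchronizationContext))
    {
        return new SynchronizationContextAwaitTaskContinuation(
            syncCtx, box.MoveNextAction, flowExecutionContext: false);
    }

    // ...otherwise, check for a custom TaskScheduler.
    TaskScheduler? scheduler = TaskScheduler.InternalCurrent;
    if (scheduler != null && scheduler != TaskScheduler.Default)
    {
        return new TaskSchedulerAwaitTaskContinuation(
            scheduler, box.MoveNextAction, flowExecutionContext: false);
    }
}

return box; // no special scheduling needed: store the box itself as the continuation
```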

This is part of a method that’s determining what object to store into the Task as a continuation. It’s being passed the stateMachineBox , which, as was alluded to earlier, can be stored directly into the Task ‘s continuation list. However, this special logic might wrap that IAsyncStateMachineBox to also incorporate a scheduler if one is present. It checks to see whether there’s currently a non-default SynchronizationContext , and if there is, it creates a SynchronizationContextAwaitTaskContinuation as the actual object that’ll be stored as the continuation; that object in turn wraps the original box and the captured SynchronizationContext , and knows how to invoke the former’s MoveNext in a work item queued to the latter. This is how you’re able to await as part of some event handler in a UI application and have the code after the await continue on the right thread. The next interesting thing to note here is that it’s not just paying attention to a SynchronizationContext : if it couldn’t find a custom SynchronizationContext to use, it also looks to see whether the TaskScheduler type that’s used by Task s has a custom one in play that needs to be considered. As with SynchronizationContext , if there’s a non-default one of those, it’s then wrapped with the original box in a TaskSchedulerAwaitTaskContinuation that’s used as the continuation object.

But arguably the most interesting thing to notice here is the very first line of the method body: if (continueOnCapturedContext) . We only do these checks for SynchronizationContext / TaskScheduler if continueOnCapturedContext is true ; if it’s false , the implementation behaves as if both were default and ignores them. What, pray tell, sets continueOnCapturedContext to false? You’ve probably guessed it: using the ever popular ConfigureAwait(false) .

I talk about ConfigureAwait at length in ConfigureAwait FAQ , so I’d encourage you to read that for more information. Suffice it to say, the only thing ConfigureAwait(false) does as part of an await is feed its argument Boolean into this function (and others like it) as that continueOnCapturedContext value, so as to skip the checks on SynchronizationContext / TaskScheduler and behave as if neither of them existed. In the case of Task s, this then permits the Task to invoke its continuations wherever it deems fit rather than being forced to queue them to execute on some specific scheduler.

I previously mentioned one other aspect of SynchronizationContext , and I said we’d see it again: OperationStarted / OperationCompleted . Now’s the time. These rear their heads as part of the feature everyone loves to hate: async void . ConfigureAwait -aside, async void is arguably one of the most divisive features added as part of async/await . It was added for one reason and one reason only: event handlers. In a UI application, you want to be able to write code like the following:
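The example isn't shown in this excerpt; it's an event handler along these lines (ComputeMessage again being a hypothetical helper):

```csharp
async void button1_Click(object sender, EventArgs e)
{
    string message = await Task.Run(() => ComputeMessage());
    button1.Text = message;
}
```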

but if all async methods had to have a return type like Task , you wouldn’t be able to do this. The Click event has a signature public event EventHandler? Click; , with EventHandler defined as public delegate void EventHandler(object? sender, EventArgs e); , and thus to provide a method that matches that signature, the method needs to be void -returning.

There are a variety of reasons async void is considered bad, why articles recommend avoiding it wherever possible, and why analyzers have sprung up to flag use of them. One of the biggest issues is with delegate inference. Consider this program:
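The program isn't reproduced in this excerpt; based on the discussion that follows, it was along these lines:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Time(async () =>
        {
            Console.WriteLine("Enter");
            await Task.Delay(TimeSpan.FromSeconds(10));
            Console.WriteLine("Exit");
        });
    }

    // The lambda above is inferred as an Action, so the compiler produces an
    // async void method; Time has no way to observe its eventual completion.
    public static void Time(Action action)
    {
        Console.WriteLine("Timing...");
        Stopwatch sw = Stopwatch.StartNew();
        action();
        Console.WriteLine($"...done timing: {sw.Elapsed}");
    }
}
```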

One could easily expect this to output an elapsed time of at least 10 seconds, but if you run this you’ll instead find output like this:

Huh? Of course, based on everything we’ve discussed in this post, it should be understood what the problem is. The async lambda is actually an async void method. Async methods return to their caller the moment they hit the first suspension point. If this were an async Task method, that’s when the Task would be returned. But in the case of an async void , nothing is returned. All the Time method knows is that it invoked action(); and the delegate call returned; it has no idea that the async method is actually still “running” and will asynchronously complete later.

That’s where OperationStarted / OperationCompleted come in. Such async void methods are similar in nature to the EAP methods discussed earlier: the initiation of such methods is void , and so you need some other mechanism to be able to track all such operations in flight. The EAP implementations thus call the current SynchronizationContext ‘s OperationStarted when the operation is initiated and OperationCompleted when it completes, and async void does the same. The builder associated with async void is AsyncVoidMethodBuilder . Remember in the entry point of an async method how the compiler-generated code invokes the builder’s static Create method to get an appropriate builder instance? AsyncVoidMethodBuilder takes advantage of that in order to hook creation and invoke OperationStarted :
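Paraphrasing the Corelib implementation:

```csharp
public static AsyncVoidMethodBuilder Create()
{
    // Capture the current SynchronizationContext and notify it that an
    // operation has started; the builder invokes OperationCompleted on the
    // same context when the async void method later completes.
    SynchronizationContext? sc = SynchronizationContext.Current;
    sc?.OperationStarted();
    return new AsyncVoidMethodBuilder() { _synchronizationContext = sc };
}
```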

Similarly, when the builder is marked for completion via either SetResult or SetException , it invokes the corresponding OperationCompleted method. This is how a unit testing framework like xunit is able to have async void test methods and still employ a maximum degree of concurrency on concurrent test executions, for example in xunit’s AsyncTestSyncContext .

With that knowledge, we can now rewrite our timing sample:
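The listing itself was lost in this copy; a reconstruction of the idea (type and member names are my own) is:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

// Counts operations via OperationStarted/OperationCompleted and lets a caller
// block until all in-flight async void operations have completed.
public sealed class WaitingSynchronizationContext : SynchronizationContext
{
    private int _operations;
    private readonly ManualResetEventSlim _done = new ManualResetEventSlim(initialState: true);

    public override void OperationStarted()
    {
        if (Interlocked.Increment(ref _operations) == 1) _done.Reset();
    }

    public override void OperationCompleted()
    {
        if (Interlocked.Decrement(ref _operations) == 0) _done.Set();
    }

    public void WaitForPendingOperations() => _done.Wait();
}

public static class AsyncVoidTimer
{
    public static void Main() =>
        Time(async () => await Task.Delay(TimeSpan.FromSeconds(10)));

    public static TimeSpan Time(Action action)
    {
        var context = new WaitingSynchronizationContext();
        SynchronizationContext.SetSynchronizationContext(context);

        var sw = Stopwatch.StartNew();
        action();                           // async void: returns at the first await...
        context.WaitForPendingOperations(); // ...so block until OperationCompleted fires
        sw.Stop();

        Console.WriteLine($"Elapsed: {sw.Elapsed}");
        return sw.Elapsed;
    }
}
```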

Here, I’ve created a SynchronizationContext that tracks a count for pending operations, and supports blocking waiting for them all to complete. When I run that, I get output like this:

State Machine Fields

At this point, we’ve seen the generated entry point method and how everything in the MoveNext implementation works. We also glimpsed some of the fields defined on the state machine. Let’s take a closer look at those.

For the CopyStreamToStream method shown earlier:
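(The listing is reproduced here from the earlier part of the article as best this copy allows; the enclosing class name is mine.)

```csharp
using System.IO;
using System.Threading.Tasks;

public class StreamCopier
{
    public async Task CopyStreamToStreamAsync(Stream source, Stream destination)
    {
        var buffer = new byte[0x1000];
        int numRead;
        while ((numRead = await source.ReadAsync(buffer, 0, buffer.Length)) != 0)
        {
            await destination.WriteAsync(buffer, 0, numRead);
        }
    }
}
```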

here are the fields we ended up with:
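The listing is missing in this copy; the exact compiler-generated names vary by compiler version, but the fields are approximately:

```csharp
public int <>1__state;
public AsyncTaskMethodBuilder <>t__builder;
public Stream source;
public Stream destination;
private byte[] <buffer>5__2;
private TaskAwaiter<int> <>u__1;
private TaskAwaiter <>u__2;
```

(These are deliberately not legal C# identifiers, which is exactly why the compiler uses them: they can't collide with anything in user code.)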

What are each of these?

  • <>1__state . This is the state machine's current state: -1 means the method is executing (or hasn't yet started), -2 means it's completed, and a non-negative value means it's suspended at the await with that number. So if you were examining an async method's state machine in a debugger and you found the state value was 2, that almost certainly means the async method is currently suspended waiting for the task returned from C() to complete.

  • <>t__builder . This is the builder for the state machine, e.g. AsyncTaskMethodBuilder for a Task , AsyncValueTaskMethodBuilder<TResult> for a ValueTask<TResult> , AsyncVoidMethodBuilder for an async void method, or whatever builder was declared for use via [AsyncMethodBuilder(...)] on either the async return type or overridden via such an attribute on the async method itself. As previously discussed, the builder is responsible for the lifecycle of the async method, including creating the return task, eventually completing that task, and serving as an intermediary for suspension, with the code in the async method asking the builder to suspend until a specific awaiter completes.

the compiler will emit these fields onto the state machine:

Note the distinct lack of something named someArgument . But, if we change the async method to actually use the argument in any way:

it shows up:
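The two method listings didn't survive here; a pair of sketches consistent with the description (method names and bodies are my own) is:

```csharp
using System.Threading.Tasks;

public class LiftingExample
{
    // someArgument is never used, so the compiler doesn't lift it
    // into a field on the state machine.
    public async Task M(int someArgument)
    {
        await Task.Yield();
    }

    // Here someArgument is used after the await, so its value must survive
    // the suspension: it gets lifted into a state machine field.
    public async Task<int> M2(int someArgument)
    {
        await Task.Yield();
        return someArgument;
    }
}
```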

  • <buffer>5__2; . This is the buffer “local” that got lifted to be a field so that it could survive across await points. The compiler tries reasonably hard to keep state from being lifted unnecessarily. Note that there’s another local in the source, numRead , that doesn’t have a corresponding field in the state machine. Why? Because it’s not necessary. That local is set as the result of the ReadAsync call and is then used as the input to the WriteAsync call. There’s no await in between those across which the numRead value would need to be stored. Just as how in a synchronous method the JIT compiler could choose to store such a value entirely in a register and never actually spill it to the stack, the C# compiler can avoid lifting this local to be a field as it needn’t preserve its value across any awaits. In general, the C# compiler can elide lifting locals if it can prove that their value needn’t be preserved across await s.

there are five await s, but only two different types of awaiters involved: three are TaskAwaiter<int> and two are TaskAwaiter<bool> . As such, there only end up being two awaiter fields on the state machine:
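The example referred to above didn't survive the copy; a reconstruction with the same shape (helper names A/B are assumptions) is:

```csharp
using System.Threading.Tasks;

public static class AwaiterReuseExample
{
    static Task<int> A() => Task.FromResult(1);
    static Task<bool> B() => Task.FromResult(true);

    // Five awaits, but only two awaiter types (TaskAwaiter<int> and
    // TaskAwaiter<bool>), so the state machine needs only two awaiter fields.
    public static async Task<int> M()
    {
        int sum = await A();
        if (await B()) sum += await A();
        if (await B()) sum += await A();
        return sum;
    }
}
```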

Then if I change my example to instead be:
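A reconstruction of the changed example (again with assumed helper names), sprinkling in ConfigureAwait(false):

```csharp
using System.Threading.Tasks;

public static class ConfiguredAwaiterExample
{
    static Task<int> A() => Task.FromResult(1);
    static Task<bool> B() => Task.FromResult(true);

    // Still just Task<int>s and Task<bool>s, but ConfigureAwait introduces
    // ConfiguredTaskAwaitable<T>.ConfiguredTaskAwaiter awaiter types,
    // so four distinct awaiter fields are needed.
    public static async Task<int> M()
    {
        int sum = await A();
        if (await B()) sum += await A().ConfigureAwait(false);
        if (await B().ConfigureAwait(false)) sum += await A();
        return sum;
    }
}
```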

there are still only Task<int> s and Task<bool> s involved, but I’m actually using four distinct struct awaiter types, because the awaiter returned from the GetAwaiter() call on the thing returned by ConfigureAwait is a different type than that returned by Task.GetAwaiter() … this is again evident from the awaiter fields created by the compiler:
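The field listing is missing in this copy; it would be approximately:

```csharp
private TaskAwaiter<int> <>u__1;
private TaskAwaiter<bool> <>u__2;
private ConfiguredTaskAwaitable<int>.ConfiguredTaskAwaiter <>u__3;
private ConfiguredTaskAwaitable<bool>.ConfiguredTaskAwaiter <>u__4;
```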

If you find yourself wanting to optimize the size associated with an async state machine, one thing you can look at is whether you can consolidate the kinds of things being awaited and thereby consolidate these awaiter fields.

There are other kinds of fields you might see defined on a state machine. Notably, you might see some fields containing the word “wrap”. Consider this silly example:
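The example itself is missing here; based on the explanation that follows, it was along these lines (class and method names are mine):

```csharp
using System;
using System.Threading.Tasks;

public static class SpillExample
{
    // The await happens first; by the time DateTime.Now.Second is evaluated,
    // there's nothing that needs to be preserved across a suspension point.
    public static async Task<int> M() =>
        await Task.FromResult(42) + DateTime.Now.Second;
}
```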

This produces a state machine with the following fields:
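The listing didn't survive the copy; for a method that awaits a Task<int> and only then adds DateTime.Now.Second, the fields are approximately:

```csharp
public int <>1__state;
public AsyncTaskMethodBuilder<int> <>t__builder;
private TaskAwaiter<int> <>u__1;
```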

Nothing special so far. Now flip the order of the expressions being added:
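A reconstruction of the flipped version (names are mine, matching the sketch style above):

```csharp
using System;
using System.Threading.Tasks;

public static class SpillExample2
{
    // DateTime.Now.Second is computed before the await, so its value must be
    // spilled into a temporary field to survive the suspension.
    public static async Task<int> M() =>
        DateTime.Now.Second + await Task.FromResult(42);
}
```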

With that, you get these fields:
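Again the listing is missing; approximately:

```csharp
public int <>1__state;
public AsyncTaskMethodBuilder<int> <>t__builder;
private int <>7__wrap1;
private TaskAwaiter<int> <>u__1;
```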

We now have one more: <>7__wrap1 . Why? Because we computed the value of DateTime.Now.Second , and only after computing it, we had to await something, and the value of the first expression needs to be preserved in order to add it to the result of the second. The compiler thus needs to ensure that the temporary result from that first expression is available to add to the result of the await , which means it needs to spill the result of the expression into a temporary, which it does with this <>7__wrap1 field. If you ever find yourself hyper-optimizing async method implementations to drive down the amount of memory allocated, you can look for such fields and see if small tweaks to the source could avoid the need for spilling and thus avoid the need for such temporaries.

I hope this post has helped to illuminate exactly what’s going on under the covers when you use async / await , but thankfully you generally don’t need to know or care. There are many moving pieces here, all coming together to create an efficient solution to writing scalable asynchronous code without having to deal with callback soup. And yet at the end of the day, those pieces are actually relatively simple: a universal representation for any asynchronous operation, a language and compiler capable of rewriting normal control flow into a state machine implementation of coroutines, and patterns that bind them all together. Everything else is optimization gravy.

Happy coding!

Stephen Toub - MSFT Partner Software Engineer, .NET


Another great, ten-million-word, in-depth article from Stephen. A small book, even! Keep it coming, it’s very nice to see such in-depth articles. It’ll take a good while to consume.


I think Async/Await is harmful. I think message passing is the way to go for inter-thread communication. The Qt C++ framework offers a better solution to handle inter-thread communication using signals and slots, objects (QObjects) belong to threads that run event loops. I prefer timers running on event loops (like Qt’s QTimer) to wild timers running on threads (without event loops) over which we have very little control.

Async/Await isn’t really about inter-thread communication. Since you mentioned C++, I’ll give a little example in terms of C++. Suppose you want to show a FileOpenPicker. After you initialise it and are ready to pick the file, the next thing you want to do is call PickSingleFileAsync. If you do this, then how do you wait for the results? One option is to provide a completion handler for the IAsyncOperation that PickSingleFileAsync returns and then wait. But that blocks the UI thread, and if it takes you a little while to pick the file then the UI thread can be there for a while not processing messages. Another option is to use C++’s coroutines, which is close enough to Async/Await here. Well, one thing that you could do is:

You await the PickSingleFileAsync call, and once this is complete, you notify the window that the operation is complete. So, how does this work? Well, taking information from Raymond Chen’s excellent series on how C++/WinRT implements this, the gist of it is that co_await suspends and then resumes on a separate thread in a thread pool. It waits on the PickSingleFileAsync call. Once this call completes, it suspends the coroutine again and then resumes it in the original apartment that it was running in. The coroutine then schedules some means of notifying the window that the operation has completed and then finally returns, thus completing the coroutine. The asynchronous operation itself is still synchronised by the main thread’s message queue, so you have a lot of control over it. To the C# people in this post, apologies for the C++. It seems Amine misunderstood what this type of functionality is there for, so I provided an example in terms of something they seem to be more acquainted with.

Great example. In fact, the WinMD projection for C# supports async / await against IAsyncOperation . This has been commonplace when interoperating with native code on the now defunct Windows Phone as well as UWP. Stephen didn’t specifically call this out, but this is another example of supporting being awaitable and task-like . There are even extension methods to wrap IAsyncOperation as a Task so it can interoperate with managed code that only understands tasks.

Just to share a different opinion, I think async/await is the best thing that happened to modern programming languages in the last decade or so. It isn’t just about inter-thread communication; rather, it’s about suspension / resumption of the execution flow of a certain logic. E.g., it can be extremely useful for cooperative execution on one single thread with an event loop, like in JavaScript or Python.

This is a bad take – it’s almost nonsensical. If you had actually read the first half of the article (regarding the history of async-programming in .NET), async/await wasn’t invented to facilitate inter-thread communication or parallel programming. It exists to handle suspension and resuming execution pathways, without having to register continuations inside a bunch of callbacks or event handlers. The end result is writing async code that looks very similar to regular sync code. You can use async/await inside a single-thread for instance; it’s a common misconception that just because you’re using async/await that somehow you’ve spawned multiple threads – you haven’t.

However, you CAN use async/await to accommodate sending messages, akin to something like await box.SendMessageAsync(msg) , but async/await is not, in and of itself, a message passing or multi-threading construct on its own.

Have you ever actually used async/await in c#, before posting this really strange comment?

I think labeling async/await as harmful is harmful.

Async/await is the best feature in any language in decades.

Going back to callback Hell would REALLY be harmful.

Async/await is really simple to use. Learn it, instead of casting it aside with a phrasing that has become a meme at this point. This kind of attitude is harmful.

Love the refreshed updated insight, and while not needed it is very nice to read about something that has had a big impact on .NET and how things were written. Still remember all the old variants and complications that came with them, especially with ContinueWith readability hell.

Thanks for the in-depth history and explanation! Always fun to read your blog posts!

I do continue to maintain that ConfigureAwait(true) being the default was a monumental mistake that we still have to suffer from every day. Making some UI code in Windows Forms slightly terser at the expense of 99% of other code potentially getting subtle deadlock bugs was a poor tradeoff. I really really hope you or someone on the C# language team will at some point champion a project level switch to set the default ConfigureAwait behavior for the whole assembly. Given the millions of ConfigureAwait(false)s out there (with more written every single day), this is one of C#/.NET’s most acute problems that desperately needs a solution.

Thanks. You’re welcome.

I totally agree here. Even Swift async/await has shown that the other default would be much better: keep the code as performant as possible, and make it explicit when you want to re-enter the calling thread. It could be implemented for example via a project setting (like what was done for nullables), perhaps initially with the same default value as now, but changing over time (as happened with nullables).


“ConfigureAwait(true) being the default was a monumental mistake that we still have to suffer from every day”

“It could be implemented for example via a project setting”

double super agree

An excellent article – echo the point that it will take a while to consume. Any chance of a PDF with chapter links to make it easier to consume? Thanks for all the great work you do to provide informative content, Stephen.

Thanks. My pleasure. With regards to a PDF, I’ve not planned to create one; best I can suggest is print to pdf, though that won’t have chapter links.

I’ve copied the blog to a Word document and added page numbers and table of contents. I had to do a fair amount of editing to get it to look good on standard sized document pages and to correct some things that copy-and-paste messed up. This can easily be exported to a PDF and the headings in the table of contents will be clickable.


Small typo in the article summary – “and has transormed”. It’s going to be a great weekend read.

Oops! Fixed, thanks.

Many thanks Stephen, especially for the AsyncMethodBuilder section. A must-read for any C# developer (and beyond C#).

I hope in .NET one day we’ll have await ThreadPool.SwitchTo() , SynchronizationContext.SwitchTo , TaskScheduler.SwitchTo .

And maybe something like ConfigureAwait(ContinuationContext context) for precise control over await continuation context, where ContinuationContext could be:
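The commenter's sketch of ContinuationContext was lost in this copy; presumably it was something along these lines (an entirely hypothetical API, not part of .NET):

```csharp
// Hypothetical, from the comment thread; none of this exists in .NET today:
public enum ContinuationContext
{
    Captured,    // today's default: resume on the captured SynchronizationContext/TaskScheduler
    ThreadPool,  // always resume on the thread pool
    Inline       // continue synchronously on whatever thread completed the awaited operation
}
```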

This sounds really great, can someone explain why this was not considered from the start?

I can only speculate: perhaps, to avoid extra complexity and cognitive load in what was already a revolutionary new and complex concept when it was introduced about a decade ago.

At the same time, I personally find the default behavior of ConfigureAwait a bit too nuanced, and it can be slightly different between Task and ValueTask .

On a side note, I use some homebrew implementations of SwitchTo all the time and find them very useful and mnemonic.

You’re very welcome! Glad you found it helpful.

There is a huge mistake: it is spelled gobbledygook , not gobbledydook !

In all seriousness, the article is awesome, can’t overstate this.

There is a huge mistake: it is spelled gobbledygook, not gobbledydook!

Oops 🙂 Thanks, fixed.

In all seriousness, the article is awesome, can’t overstate this.

Excellent, very glad you liked it.

Took me about 3 hours to read (and understand) This is really great. Thank you Stephen.

Very glad it was helpful. Thanks.

Great in-depth article!

Thank you, Stephen!

Thanks! You’re welcome.



Andrew Lock | .NET Escapades


A deep-dive into the new Task.WaitAsync() API in .NET 6

In this post I look at how the new Task.WaitAsync() API is implemented in .NET 6, looking at the internal types used to implement it.

Adding a timeout or cancellation support to await Task

In my previous post , I showed how you could "cancel" an await Task call for a Task that didn't directly support cancellation by using the new WaitAsync() API in .NET 6.

I used WaitAsync() in that post to improve the code that waits for the IHostApplicationLifetime.ApplicationStarted event to fire. The final code I settled on is shown below:

In this post, I look at how the .NET 6 API Task.WaitAsync() is actually implemented.
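Before diving into the internals, it's worth a quick sketch of WaitAsync()'s observable behavior (names here are mine; this is a minimal demo, not code from the post):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class WaitAsyncDemo
{
    public static async Task<string> Run()
    {
        var tcs = new TaskCompletionSource<int>();

        // 1. Timeout: the source task never completes, so WaitAsync faults
        //    with a TimeoutException after the delay elapses.
        try
        {
            await tcs.Task.WaitAsync(TimeSpan.FromMilliseconds(50));
            return "unexpected";
        }
        catch (TimeoutException) { }

        // 2. Cancellation: a cancelled token makes the returned task
        //    complete as cancelled.
        using var cts = new CancellationTokenSource();
        cts.Cancel();
        try
        {
            await tcs.Task.WaitAsync(cts.Token);
            return "unexpected";
        }
        catch (OperationCanceledException) { }

        return "timeout-then-cancelled";
    }
}
```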

Diving into the Task.WaitAsync implementation

For the rest of the post I'm going to walk through the implementation behind the API. There's not anything very surprising there, but I haven't looked much at the code behind Task and its kin, so it was interesting to see some of the details.

Task.WaitAsync() was introduced in this PR by Stephen Toub .

We'll start with the Task.WaitAsync methods :

These three methods all ultimately delegate to a different, private , WaitAsync overload (shown shortly) that takes a timeout in milliseconds. This timeout is calculated and validated in the ValidateTimeout method , shown below, which asserts that the timeout is in the allowed range, and converts it to a uint of milliseconds.

Now we come to the WaitAsync method that all the public APIs delegate to . I've annotated the method below:

Most of this method is checking whether we can take a fast-path and avoid the extra work involved in creating a CancellationPromise<T> , but if not, then we need to dive into it. Before we do, it's worth addressing the VoidTaskResult generic parameter used with the returned CancellationPromise<T> .

VoidTaskResult is an internal nested type of Task , which is used a little like the unit type in functional programming ; it indicates that you can ignore the T .

Using VoidTaskResult means more of the implementation of Task and Task<T> can be shared. In this case, the CancellationPromise<T> implementation is the same in both the Task.WaitAsync() implementation (shown above) and the generic versions of those methods exposed by Task<TResult> .

So with that out of the way, let's look at the implementation of CancellationPromise<T> to see how the magic happens.

Under the hood of CancellationPromise<T>

There's quite a few types involved in CancellationPromise that you probably won't be familiar with unless you regularly browse the .NET source code, so we'll take this one slowly.

First of all, we have the type signature for the nested type CancellationPromise<T> :
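The signature itself is missing from this copy; from the .NET 6 sources it is approximately (simplified, and nested inside Task):

```csharp
private protected sealed class CancellationPromise<TResult> : Task<TResult>, ITaskCompletionAction
```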

There's a few things to note in the signature alone:

  • private protected —this modifier means that the CancellationPromise<T> type can only be accessed from classes that derive from Task and are in the same assembly, which means you can't use it directly in your user code.
  • Task<TResult> —the CancellationPromise<T> derives from Task<TResult> . For the most part it's a "normal" task, that can be cancelled, completed, or faulted just like any other Task .
  • ITaskCompletionAction —this is an internal interface that essentially allows you to register a lightweight action to take when a Task completes. This is similar to a standard continuation created with ContinueWith , except it is lower overhead . Again, this is internal , so you can't use it in your types. We'll look in more depth at this shortly.

We've looked at the signature, now let's look at it's private fields. The descriptions for these in the source cover it pretty well I think:

So we have 3 fields:

  • The original Task on which we called WaitAsync()
  • The cancellation token registration received when we registered with the CancellationToken . If the default cancellation token was used, this will be a "dummy" default instance.
  • The timer used to implement the timeout behaviour (if required).

Note that the _timer field is of type TimerQueueTimer . This is another internal implementation, this time it is part of the overall Timer implementation . We're going deep enough as it is in this post, so I'll only touch on how this is used briefly below. For now it's enough to know that it behaves similarly to a regular System.Threading.Timer .

So, the CancellationPromise<T> is a class that derives from Task<T> , maintains a reference to the original Task , a CancellationTokenRegistration , and a TimerQueueTimer .

The CancellationPromise constructor

Let's look at the constructor now. We'll take this in 4 bite-size chunks. First off, the arguments passed in from Task.WaitAsync() have some debug assertions applied, and then the original Task is stored in _task . Finally, the CancellationPromise<T> instance is registered as a completion action for the source Task (we'll come back to what this means shortly).

Next we have the timeout configuration. This creates a TimerQueueTimer and passes in a callback to be executed after millisecondsDelay (and does not execute periodically). A static lambda is used to avoid capturing state, which instead is passed as the second argument to the TimerQueueTimer . The callback tries to mark the CancellationPromise<T> as faulted by setting a TimeoutException() (remember that CancellationPromise<T> itself is a Task ), and then does some cleanup we'll see later.

Note also that flowExecutionContext is false , which avoids capturing and restoring the execution context for performance reasons. For more about execution context, see this post by Stephen Toub .

After configuring the timeout, the constructor configures the CancellationToken support. This similarly registers a callback to fire when the provided CancellationToken is cancelled. Note that again this uses UnsafeRegister() (instead of the normal Register() ) to avoid flowing the execution context into the callback.

Finally, the constructor does some housekeeping. This accounts for the situation where the source Task completes while the constructor is executing, before the timeout and cancellation have been registered, or where the timeout fires before the cancellation is registered. Without the following block, you could end up leaking resources that never get cleaned up.

That's all the code in the constructor. Once constructed, the CancellationPromise<T> is returned from the WaitAsync() method as a Task (or a Task<T> ), and can be awaited just as any other Task . In the next section we'll see what happens when the source Task completes.

Implementing ITaskCompletionAction

In the constructor of CancellationPromise<T> we registered a completion action with the source Task (the one we called WaitAsync() on):

The object passed to AddCompletionAction() must implement ITaskCompletionAction (as CancellationPromise<T> does). The ITaskCompletionAction interface is simple, consisting of a single method (which is invoked when the source Task completes) and a single property:
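The interface listing is missing here; its shape is approximately as follows, redeclared as a standalone sketch since the real one is internal to the BCL (the toy implementation is mine):

```csharp
using System;
using System.Threading.Tasks;

// Approximate shape of the BCL's internal interface:
internal interface ITaskCompletionAction
{
    // Invoked when the tracked Task completes.
    void Invoke(Task completingTask);

    // Whether Invoke may run arbitrary (user) code.
    bool InvokeMayRunArbitraryCode { get; }
}

// A toy implementation, just to show the contract in action:
internal sealed class PrintOnComplete : ITaskCompletionAction
{
    public bool InvokeMayRunArbitraryCode => true;

    public void Invoke(Task completingTask) =>
        Console.WriteLine($"completed with status {completingTask.Status}");
}
```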

CancellationPromise<T> implements this method as shown below. It sets InvokeMayRunArbitraryCode to true (as all non-specialised scenarios do) and implements the Invoke() method, receiving the completed source Task as an argument.

The implementation essentially "copies" the status of the completed source Task into the CancellationPromise<T> task:

  • If the source Task was cancelled, it calls TrySetCancelled , re-using the exception dispatch information to "hide" the details of CancellationPromise<T>
  • If the source task was faulted, it calls TrySetException()
  • If the task completed, it calls TrySetResult

Note that whatever the status of the source Task , the TrySet* method may fail, if cancellation was requested or the timeout expired in the meantime. In those cases the bool variable is set to false , and we can skip calling Cleanup() (as the successful path will call it instead).

Now you've seen all three callbacks for the 3 possible outcomes of WaitAsync() . In each case, whether the task, timeout, or cancellation completes first, we have some cleanup to do.

Cleaning up

One of the things you can forget when working with CancellationToken s and timers, is to make sure you clean up after yourself. CancellationPromise<T> makes sure to do this by always calling Cleanup() . This does three things:

  • Dispose the CancellationTokenRegistration returned from CancellationToken.UnsafeRegister()
  • Close the TimerQueueTimer (if it exists), which cleans up the underlying resources
  • Removes the callback from the source Task , so the ITaskCompletionAction.Invoke() method on CancellationPromise<T> won't be called.

Each of these methods is idempotent and thread-safe, so it's safe to call the Cleanup() method from multiple callbacks, which might happen if something fires when we're still running the CancellationPromise<T> constructor, for example.

One point to bear in mind is that even if a timeout occurs, or the cancellation token fires and the CancellationPromise<T> completes, the source Task will continue to execute in the background. The caller who executed source.WaitAsync() won't ever see the result of the Task , but if that Task has side effects, they will still occur.

And that's it! It took a while to go through it, but there's not actually much code involved in the implementation of WaitAsync() , and it's somewhat comparable to the "naive" approach you might have used in previous versions of .NET, but using some of .NET's internal types for performance reasons. I hope it was interesting!

In this post I took an in-depth look at the new Task.WaitAsync() method in .NET 6, exploring how it is implemented using internal types of the BCL. I showed that the Task returned from WaitAsync() is actually a CancellationPromise<T> instance, which derives from Task<T> , but which supports cancellation and timeouts directly. Finally, I walked through the implementation of CancellationPromise<T> , showing how it wraps the source Task .



How To Read the Request Body in ASP.NET Core Web API

Posted by Code Maze | Updated Jan 31, 2024



ASP.NET Core offers a versatile request-response pipeline, allowing seamless customization and intervention. Managing incoming requests to read the request body in an ASP.NET Core Web API application is a common and crucial task. There are multiple methods and techniques available, each tailored to specific requirements. This comprehensive guide delves into various approaches, providing detailed insights into their usage and benefits.

So let’s dive into the details.

Reading Request Body in a Controller

Parsing the request body within our ASP.NET Core Web API controllers gives us the flexibility to manage incoming data. Whether we choose to read the body as a string or employ model binding for structured data, these methods provide us with the tools to seamlessly process incoming requests.


Reading as String

We can read the request body as a string in a controller action. ASP.NET Core offers the Request property within controller actions, granting access to the Request.Body . However, this body has a Stream type and is unreadable to us. To handle this stream data as text, we need to convert it into a string format:
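The action listing didn't survive this copy; a sketch consistent with the description (the action name is mine, and it relies on the ReadAsStringAsync() extension method implemented below) is:

```csharp
[HttpPost]
public async Task<IActionResult> ReadBodyAsString()
{
    var rawBody = await Request.Body.ReadAsStringAsync();

    return Ok(rawBody);
}
```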

Here, we access the request body by invoking an extension method called ReadAsStringAsync() . We'll delve into its details shortly. Once we obtain the result from the extension method, we simply return it from our action for testing purposes.

ReadAsStringAsync() Extension Method

Rather than directly converting the stream data to a string within the controller action, we can implement an extension method. This approach allows us to use it in various scenarios. As we progress through this article, there will be a recurrent need to convert stream data to a string. Leveraging extension methods provides an efficient solution to implement once and employ multiple times.

Let’s implement the extension method:
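(The original listing is missing from this copy; a version consistent with the article's description is shown here.)

```csharp
using System.IO;
using System.Threading.Tasks;

public static class StreamExtensions
{
    // Reads the remainder of the stream as a string. leaveOpen controls whether
    // the underlying stream stays usable after the StreamReader is disposed.
    public static async Task<string> ReadAsStringAsync(this Stream stream, bool leaveOpen = false)
    {
        using var reader = new StreamReader(stream, leaveOpen: leaveOpen);
        return await reader.ReadToEndAsync();
    }
}
```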

We establish an extension method named ReadAsStringAsync() . This method enhances the functionalities of the Stream type, enabling us to seamlessly convert stream data into a string format. To achieve this, we create an instance of the StreamReader class. Utilizing the StreamReader class facilitates the reading of characters from a stream, be it a file or a network stream.

For a deeper understanding of StreamReader and StreamWriter, we recommend consulting our article C# Back to Basics – Files, StreamWriter and StreamReader .

Despite the numerous constructor options available, in our context we pass two arguments to the StreamReader : the stream representing our request body, and the leaveOpen parameter. The latter ensures that the stream remains open even after the StreamReader completes its operations and is disposed.

In the subsequent step, we invoke the ReadToEndAsync() method of the reader object, which yields the string representation of the stream data. 

Using EnableBuffering for Multiple Reads

What happens if we attempt to read the request body a second time, or multiple times, in the above scenario?

Let’s check this:
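The listing is missing in this copy; a sketch of the kind of action used for the check (the action name is mine, reusing the ReadAsStringAsync() extension) is:

```csharp
[HttpPost]
public async Task<IActionResult> ReadBodyTwice()
{
    var first = await Request.Body.ReadAsStringAsync(leaveOpen: true);
    var second = await Request.Body.ReadAsStringAsync(); // the stream is already consumed

    return Ok($"First: {first}, Second: {second}");
}
```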

Now, let’s make an API call to this action method and inspect the response through Swagger:

Read request as a string

Here, we send the string “CodeMaze” as the request payload. When we check the response, we see that the first read attempt is successful, but the second one is not what we expect.

In situations requiring multiple reads of the request body, enabling buffering is essential. To achieve this, we can utilize the EnableBuffering() method of the Request object:
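A sketch of the buffered version (action name is mine; the steps match the description that follows):

```csharp
[HttpPost]
public async Task<IActionResult> ReadBodyTwiceBuffered()
{
    Request.EnableBuffering(); // allows the request body to be read multiple times

    var first = await Request.Body.ReadAsStringAsync(leaveOpen: true);
    Request.Body.Position = 0; // rewind before the second attempt
    var second = await Request.Body.ReadAsStringAsync();

    return Ok($"First: {first}, Second: {second}");
}
```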

Here, we invoke the Request.EnableBuffering() method, allowing the reading of the request body multiple times. Following that, we invoke the ReadAsStringAsync() method. To ensure the stream remains open for subsequent reads, we set the leaveOpen parameter to true. Just before the second attempt, we reset the position of the request body to zero.

With the latest modifications in place, let’s test the API with the same parameter:

First: CodeMaze, Second: CodeMaze

After invoking the EnableBuffering() method, we effectively retrieve the request body for all subsequent attempts.

Model Binding

ASP.NET Core allows automatic deserialization of the request body to a predefined model class. This approach simplifies the handling of structured data. To make use of this intriguing feature, we utilize the [FromBody] attribute within our action, preceding the model parameter:
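The listing didn't survive this copy; a sketch of such an action (the action name and DTO property are assumptions) is:

```csharp
[HttpPost]
public IActionResult CreatePerson([FromBody] PersonItemDto personItemDto)
{
    return Ok($"Received: {personItemDto.Name}");
}
```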

Here, ASP.NET Core automatically maps the incoming request body to the PersonItemDto class. Then, within the action body, we have the ability to access and utilize the properties of the model.

We are free to design the PersonItemDto class and its properties to match our application’s needs precisely:
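(The class listing is missing in this copy; the property names below are illustrative assumptions.)

```csharp
public class PersonItemDto
{
    public string Name { get; set; } = string.Empty;
    public int Age { get; set; }
}
```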

Let's explore one more scenario before bidding farewell to this topic. Let's say we want to send and read an extra salary parameter in our action method. To solve this, we can try adding an additional salary parameter with the FromBody attribute:

Is this approach correct?

There are no compilation errors when we build the code. However, at runtime, when our application attempts to start, an InvalidOperationException occurs at the line app.MapControllers() in the Program.cs file:

In simple terms, this exception notifies us that the FromBody attribute is permitted only once in action method parameters. Hence, it is advisable to gather all parameters within a single request model parameter.

Using a Custom Middleware

Middleware provides a powerful way to intercept requests and responses globally within our ASP.NET Core application. We can create custom middleware to read the request body. We will not go into details of Middleware in this article, but if you need a refresher, visit our ASP.NET Core Middleware – Creating Flexible Application Flows article.

Let’s see how to create a custom middleware to read the request body:
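The listing is missing from this copy; the skeleton is along these lines (a sketch matching the description):

```csharp
public class RequestBodyMiddleware
{
    private readonly RequestDelegate _next;

    public RequestBodyMiddleware(RequestDelegate next)
    {
        _next = next;
    }
}
```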

Here, we create a custom middleware named RequestBodyMiddleware .

Let’s now implement the Invoke() method to access and read the request body:
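A sketch of the Invoke() method consistent with the description (the route path and header key are assumptions):

```csharp
public async Task Invoke(HttpContext context)
{
    if (context.Request.Path.StartsWithSegments("/api/person"))
    {
        context.Request.EnableBuffering();
        var body = await context.Request.Body.ReadAsStringAsync(leaveOpen: true);
        context.Request.Body.Position = 0;

        // e.g. log the result, append it to a request header, check its length, ...
        context.Request.Headers["RequestBodyMiddleware"] = body;
    }

    await _next(context);
}
```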

We first examine the request path to identify specific routes. After that, we translate the request body stream into a raw string representation by calling our extension method ReadAsStringAsync() . From there, numerous options are available for leveraging the request body: logging the result, appending it to the request header, or checking the request length, among other potential uses. After concluding our processing of the request body, we pass the request to the subsequent middleware by invoking the _next() delegate.

To utilize this middleware, it’s essential to incorporate it in the Program.cs file:
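Registration might look like this in a minimal hosting model Program.cs:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();

// Register the middleware before routing so it sees every request.
app.UseMiddleware<RequestBodyMiddleware>();

app.MapControllers();
app.Run();
```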

Avoid Reading Large Request Bodies

Handling large request bodies in a web application demands caution due to potential memory issues, performance degradation, and resource exhaustion. The time-consuming processing of large bodies may lead to slower response times and reduced overall application throughput. Concerns include the risk of denial-of-service (DoS) attacks through intentionally large bodies and increased network overhead. To address these challenges, some best practices include setting size constraints, incorporating streaming mechanisms, and deploying asynchronous processing to improve scalability.

Let’s revisit our custom middleware to inspect the analysis of the request payload:
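A sketch of the size guard inside the middleware's Invoke() method; the 1 MB limit is an assumed value, not one prescribed by the article:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public partial class RequestBodyMiddleware
{
    private const long MaxRequestBodyLength = 1024 * 1024; // 1 MB (assumed limit)

    public async Task Invoke(HttpContext context)
    {
        if (context.Request.ContentLength > MaxRequestBodyLength)
        {
            // Short-circuit the pipeline with 413 Payload Too Large.
            context.Response.StatusCode = StatusCodes.Status413PayloadTooLarge;
            return;
        }

        await _next(context);
    }
}
```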

Here, we establish a condition to verify whether the request length exceeds a predefined maximum value. If it does, we halt the request pipeline and return a status code of Payload Too Large (413).

Using Action Filters to Read the Request Body

Action filters in ASP.NET Core provide a way to run logic before or after controller action methods execute. This functionality empowers us to intercept incoming requests and establish a stopping point to inspect the request body.

To learn more about the action filters, please check out Implementing Action Filters in ASP.NET Core .

Let’s create a custom action filter:
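A sketch of the filter as described; the inspected path is an assumption, and because IActionFilter is synchronous, the async extension method has to be awaited via a blocking call here:

```csharp
using Microsoft.AspNetCore.Mvc.Filters;

public class ReadRequestBodyActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        var request = context.HttpContext.Request;

        // Inspect only the routes we care about (the path is an assumption).
        if (request.Path.StartsWithSegments("/api/persons"))
        {
            // IActionFilter is synchronous, so we block on the async read.
            var body = request.ReadAsStringAsync().GetAwaiter().GetResult();

            request.Headers["ReadRequestBodyActionFilter"] = body;
        }
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // Nothing to do after the action runs.
    }
}
```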

In this scenario, we create a custom action filter called ReadRequestBodyActionFilter that implements the IActionFilter interface. Within this filter, we define the OnActionExecuting() method to handle our specific logic. Then, we examine the request path and extract the request body using the ReadAsStringAsync() extension method. Lastly, we append the request body to the request header using the key ReadRequestBodyActionFilter .

To utilize this action filter, it’s essential to register it in the Program.cs file:
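One way to register the filter globally in Program.cs:

```csharp
builder.Services.AddControllers(options =>
{
    // Apply the filter to every controller action.
    options.Filters.Add<ReadRequestBodyActionFilter>();
});
```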

Using a Custom Attribute to Read the Request Body

When it comes to intercepting incoming requests, custom attributes can be used in combination with action filters to modify the behavior of the request processing pipeline.

Let’s create a custom attribute to inspect and read the request body:
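A sketch of the attribute as described in the following paragraph:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Filters;

[AttributeUsage(AttributeTargets.Method)]
public class ReadRequestBodyAttribute : Attribute, IAsyncActionFilter
{
    public async Task OnActionExecutionAsync(
        ActionExecutingContext context, ActionExecutionDelegate next)
    {
        var request = context.HttpContext.Request;

        // Buffer the body so model binding can still read it afterwards.
        request.EnableBuffering();

        using var reader = new StreamReader(request.Body, leaveOpen: true);
        var body = await reader.ReadToEndAsync();
        request.Body.Position = 0;

        request.Headers["ReadRequestBodyAttribute"] = body;

        await next();
    }
}
```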

Here, we create a custom attribute called ReadRequestBodyAttribute that implements the IAsyncActionFilter interface. Then, we implement the OnActionExecutionAsync() method to read the request body. Once again, we create a StreamReader object and we access the request body as a string by calling the ReadToEndAsync() method. Finally, we append the request body to the request header using the key ReadRequestBodyAttribute .

We can now proceed to utilize our custom attribute. To do so, we need to apply it to our controller action:
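A sketch of such an action; the route template is an assumption:

```csharp
[HttpPost("from-attribute")]
[ReadRequestBody]
public IActionResult ReadFromAttribute()
{
    // The attribute stored the raw body in this header before the action ran.
    var body = Request.Headers["ReadRequestBodyAttribute"].ToString();

    return Ok(body);
}
```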

Here, we apply our custom attribute ReadRequestBody to the controller action ReadFromAttribute(). Within the action, we inspect the request header ReadRequestBodyAttribute and assign its content to the action response.

In this article, we have explored various ways to read the request body in an ASP.NET Core Web API application. Reading the request body directly in controller actions offers simplicity and control for basic scenarios. This approach is the most commonly used in .NET Core Web API projects.

Custom middleware is the right choice when we want extensive global interception abilities. It allows us to log the request body in a single place for all Web API endpoints, and to manipulate both requests and responses in that one place.

Action filters are a good candidate to encapsulate logic, enhancing the clarity and focus of controller actions. By using action filters, we abstract away the intricacies of handling the request body, allowing the controller action to maintain a cleaner, more dedicated focus on its primary purpose.

Finally, we can leverage custom attributes for specialized, declarative handling, which gives us precise control over how request bodies are processed. The suitability of each approach depends on the specific requirements of our application, ranging from basic control to the need for encapsulation and specialization.



Async return types (C#)


Async methods can have the following return types:

  • Task , for an async method that performs an operation but returns no value.
  • Task<TResult> , for an async method that returns a value.
  • void , for an event handler.
  • Any type that has an accessible GetAwaiter method. The object returned by the GetAwaiter method must implement the System.Runtime.CompilerServices.ICriticalNotifyCompletion interface.
  • IAsyncEnumerable<T> , for an async method that returns an async stream .

For more information about async methods, see Asynchronous programming with async and await (C#) .

Several other types also exist that are specific to Windows workloads:

  • DispatcherOperation , for async operations limited to Windows.
  • IAsyncAction , for async actions in UWP that don't return a value.
  • IAsyncActionWithProgress<TProgress> , for async actions in UWP that report progress but don't return a value.
  • IAsyncOperation<TResult> , for async operations in UWP that return a value.
  • IAsyncOperationWithProgress<TResult,TProgress> , for async operations in UWP that report progress and return a value.

Task return type

Async methods that don't contain a return statement or that contain a return statement that doesn't return an operand usually have a return type of Task . Such methods return void if they run synchronously. If you use a Task return type for an async method, a calling method can use an await operator to suspend the caller's completion until the called async method has finished.

In the following example, the WaitAndApologizeAsync method doesn't contain a return statement, so the method returns a Task object. Returning a Task enables WaitAndApologizeAsync to be awaited. The Task type doesn't include a Result property because it has no return value.
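A sketch close to the documentation's example:

```csharp
using System;
using System.Threading.Tasks;

public static class Example
{
    public static async Task DisplayCurrentInfoAsync()
    {
        await WaitAndApologizeAsync();

        Console.WriteLine($"Today is {DateTime.Now:D}");
        Console.WriteLine($"The current time is {DateTime.Now.TimeOfDay:t}");
    }

    static async Task WaitAndApologizeAsync()
    {
        // Simulate a long-running operation; no value is returned.
        await Task.Delay(2000);

        Console.WriteLine("Sorry for the delay...\n");
    }
}
```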

WaitAndApologizeAsync is awaited by using an await statement instead of an await expression, similar to the calling statement for a synchronous void-returning method. The application of an await operator in this case doesn't produce a value. When the right operand of an await is a Task<TResult>, the await expression produces a result of type TResult. When the right operand of an await is a Task, the await and its operand form a statement.

You can separate the call to WaitAndApologizeAsync from the application of an await operator, as the following code shows. However, remember that a Task doesn't have a Result property, and that no value is produced when an await operator is applied to a Task .

The following code separates calling the WaitAndApologizeAsync method from awaiting the task that the method returns.
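For instance, the call and the await can be split like this (a sketch, reusing the WaitAndApologizeAsync method from above):

```csharp
// Start the asynchronous operation but don't await it yet.
Task waitAndApologizeTask = WaitAndApologizeAsync();

// Do other work while the task runs.
string output =
    $"Today is {DateTime.Now:D}\n" +
    $"The current time is {DateTime.Now.TimeOfDay:t}\n";

// Awaiting a plain Task produces no value; it only suspends until completion.
await waitAndApologizeTask;
Console.WriteLine(output);
```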

Task<TResult> return type

The Task<TResult> return type is used for an async method that contains a return statement in which the operand is TResult .

In the following example, the GetLeisureHoursAsync method contains a return statement that returns an integer. The method declaration must specify a return type of Task<int> . The FromResult async method is a placeholder for an operation that returns a DayOfWeek .
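A sketch close to the documentation's example:

```csharp
using System;
using System.Threading.Tasks;

public static class LeisureExample
{
    public static async Task ShowTodaysInfo()
    {
        string message =
            $"Today is {DateTime.Today:D}\n" +
            "Today's hours of leisure: " +
            $"{await GetLeisureHoursAsync()}";

        Console.WriteLine(message);
    }

    static async Task<int> GetLeisureHoursAsync()
    {
        // Task.FromResult is a placeholder for an operation that returns a DayOfWeek.
        DayOfWeek today = await Task.FromResult(DateTime.Now.DayOfWeek);

        int leisureHours =
            today is DayOfWeek.Saturday or DayOfWeek.Sunday ? 16 : 5;

        return leisureHours;
    }
}
```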

When GetLeisureHoursAsync is called from within an await expression in the ShowTodaysInfo method, the await expression retrieves the integer value (the value of leisureHours ) that's stored in the task returned by the GetLeisureHoursAsync method. For more information about await expressions, see await .

You can better understand how await retrieves the result from a Task<T> by separating the call to GetLeisureHoursAsync from the application of await , as the following code shows. A call to method GetLeisureHoursAsync that isn't immediately awaited returns a Task<int> , as you would expect from the declaration of the method. The task is assigned to the getLeisureHoursTask variable in the example. Because getLeisureHoursTask is a Task<TResult> , it contains a Result property of type TResult . In this case, TResult represents an integer type. When await is applied to getLeisureHoursTask , the await expression evaluates to the contents of the Result property of getLeisureHoursTask . The value is assigned to the ret variable.
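A sketch of that separation, reusing GetLeisureHoursAsync from above:

```csharp
// The call returns Task<int> immediately, without blocking.
Task<int> getLeisureHoursTask = GetLeisureHoursAsync();

// ... other work could happen here while the task runs ...

// Awaiting the task evaluates to the contents of its Result property.
int ret = await getLeisureHoursTask;
Console.WriteLine($"Today's hours of leisure: {ret}");
```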

The Result property is a blocking property. If you try to access it before its task is finished, the thread that's currently active is blocked until the task completes and the value is available. In most cases, you should access the value by using await instead of accessing the property directly.

The previous example retrieved the value of the Result property to block the main thread so that the Main method could print the message to the console before the application ended.

Void return type

You use the void return type in asynchronous event handlers, which require a void return type. For methods other than event handlers that don't return a value, you should return a Task instead, because an async method that returns void can't be awaited. Any caller of such a method must continue to completion without waiting for the called async method to finish. The caller must be independent of any values or exceptions that the async method generates.

The caller of a void-returning async method can't catch exceptions thrown from the method. Such unhandled exceptions are likely to cause your application to fail. If a method that returns a Task or Task<TResult> throws an exception, the exception is stored in the returned task. The exception is rethrown when the task is awaited. Make sure that any async method that can produce an exception has a return type of Task or Task<TResult> and that calls to the method are awaited.

The following example shows the behavior of an async event handler. In the example code, an async event handler must let the main thread know when it finishes. Then the main thread can wait for an async event handler to complete before exiting the program.
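A sketch of such a handler; because an async void method can't be awaited, it signals completion through a TaskCompletionSource that the main thread awaits instead:

```csharp
using System;
using System.Threading.Tasks;

public class NaiveButton
{
    public event EventHandler? Clicked;

    public void Click() => Clicked?.Invoke(this, EventArgs.Empty);
}

public static class AsyncVoidExample
{
    static readonly TaskCompletionSource<bool> s_tcs = new();

    public static async Task RunAsync()
    {
        var button = new NaiveButton();
        button.Clicked += OnButtonClickedAsync;

        button.Click();

        // Wait for the async void handler to signal that it finished.
        await s_tcs.Task;
    }

    // async void: callers can't await this method directly.
    static async void OnButtonClickedAsync(object? sender, EventArgs e)
    {
        Console.WriteLine("Handler is starting...");
        await Task.Delay(100);
        Console.WriteLine("Handler is done.");

        s_tcs.SetResult(true);
    }
}
```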

Generalized async return types and ValueTask<TResult>

An async method can return any type that has an accessible GetAwaiter method that returns an instance of an awaiter type . In addition, the type returned from the GetAwaiter method must have the System.Runtime.CompilerServices.AsyncMethodBuilderAttribute attribute. You can learn more in the article on Attributes read by the compiler or the C# spec for the Task type builder pattern .

This feature is the complement to awaitable expressions , which describes the requirements for the operand of await . Generalized async return types enable the compiler to generate async methods that return different types. Generalized async return types enabled performance improvements in the .NET libraries. Because Task and Task<TResult> are reference types, memory allocation in performance-critical paths, particularly when allocations occur in tight loops, can adversely affect performance. Support for generalized return types means that you can return a lightweight value type instead of a reference type to avoid additional memory allocations.

.NET provides the System.Threading.Tasks.ValueTask<TResult> structure as a lightweight implementation of a generalized task-returning value. The following example uses the ValueTask<TResult> structure to retrieve the value of two dice rolls.
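A sketch close to the documentation's dice example:

```csharp
using System;
using System.Threading.Tasks;

public static class DiceExample
{
    static readonly Random s_rnd = new Random();

    public static async Task RunAsync() =>
        Console.WriteLine($"You rolled {await GetDiceRollAsync()}");

    static async ValueTask<int> GetDiceRollAsync()
    {
        Console.WriteLine("Shaking dice...");

        int roll1 = await RollAsync();
        int roll2 = await RollAsync();

        return roll1 + roll2;
    }

    static async ValueTask<int> RollAsync()
    {
        // Simulate the time it takes to roll.
        await Task.Delay(500);

        return s_rnd.Next(1, 7);
    }
}
```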

Writing a generalized async return type is an advanced scenario, and is targeted for use in specialized environments. Consider using the Task , Task<T> , and ValueTask<T> types instead, which cover most scenarios for asynchronous code.

In C# 10 and later, you can apply the AsyncMethodBuilder attribute to an async method (instead of the async return type declaration) to override the builder for that type. Typically you'd apply this attribute to use a different builder provided in the .NET runtime.

Async streams with IAsyncEnumerable<T>

An async method may return an async stream , represented by IAsyncEnumerable<T> . An async stream provides a way to enumerate items read from a stream when elements are generated in chunks with repeated asynchronous calls. The following example shows an async method that generates an async stream:
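A sketch close to the documentation's example, which yields the words of a multi-line string one at a time:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public static class AsyncStreamExample
{
    public static async IAsyncEnumerable<string> ReadWordsFromStreamAsync()
    {
        string data =
            @"This is a line of text.
              Here is the second line of text.
              And there is one more for good measure.";

        using var readStream = new StringReader(data);

        // Each ReadLineAsync call is an asynchronous read; words from the
        // current line are yielded before the next line is requested.
        string? line = await readStream.ReadLineAsync();
        while (line != null)
        {
            foreach (string word in line.Split(
                ' ', StringSplitOptions.RemoveEmptyEntries))
            {
                yield return word;
            }

            line = await readStream.ReadLineAsync();
        }
    }
}
```

Callers consume the stream with `await foreach (string word in AsyncStreamExample.ReadWordsFromStreamAsync())`.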

The preceding example reads lines from a string asynchronously. Once each line is read, the code enumerates each word in the string. Callers would enumerate each word using the await foreach statement. The method awaits when it needs to asynchronously read the next line from the source string.

  • Process asynchronous tasks as they complete
  • Asynchronous programming with async and await (C#)
