ASP.NET Core API response takes too much time
I have a SQL database table with 9,000 rows and 97 columns. Its primary key consists of two columns: Color and Name.
I have an ASP.NET Core API endpoint listening at api/color/{colorName} that reads the table to return color information. Currently there are 3 colors, with about 3,000 rows each.
The endpoint takes too much time. Reading the table takes 2383 ms and mapping to DTOs takes 14 ms, and I return the DTOs to the consumer immediately after that, yet the whole request takes 4135.422 ms. I don't understand why; I would expect roughly 2407.863 ms, but it takes almost twice as long.
My code and logs are below. Do you have any ideas on how I can improve the response time?
I use Entity Framework Core 3.1, AutoMapper and ASP.NET Core 3.1.
Service:
public async Task<IEnumerable<ColorDTO>> GetColors(string requestedColor)
{
    var watch = System.Diagnostics.Stopwatch.StartNew();
    var colors = await _dbContext.Colors.Where(color => color.color == requestedColor).ToListAsync();
    watch.Stop();
    _logger.LogError("Color of:{requestedColor} Reading takes:{elapsedMs}", requestedColor, watch.ElapsedMilliseconds);

    var watch2 = System.Diagnostics.Stopwatch.StartNew();
    var colorDtos = _mapper.Map<IEnumerable<ColorDTO>>(colors);
    watch2.Stop();
    _logger.LogError("Color of:{requestedColor} Mapping takes:{elapsedMs}", requestedColor, watch2.ElapsedMilliseconds);

    return colorDtos;
}
Controller:
public async Task<ActionResult<IEnumerable<ColorDTO>>> GetBlocksOfPanel(string requestedColor)
{
    return Ok(await _colorService.GetColors(requestedColor));
}
And the logs:
2020-04-27 15:21:54.8793||0HLVAKLTJO59T:00000003|MyProject.Api.Services.IColorService|INF|Color of Purple Reading takes:2383ms
2020-04-27 15:21:54.8994||0HLVAKLTJO59T:00000003|MyProject.Api.Services.IColorService|INF|Color of Purple Mapping takes:14ms
2020-04-27 15:21:54.9032||0HLVAKLTJO59T:00000003|Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker|INF|Executed action method MyProject.Api.Web.Controllers.ColorsController.GetColors (MyProject.Api.Web), returned result Microsoft.AspNetCore.Mvc.OkObjectResult in 2407.863ms.
2020-04-27 15:21:54.9081||0HLVAKLTJO59T:00000003|Microsoft.AspNetCore.Mvc.Infrastructure.ObjectResultExecutor|INF|Executing ObjectResult, writing value of type 'System.Collections.Generic.List`1[[MyProject.Api.Contracts.Dtos.ColorDTO, MyProject.Api.Contracts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]'.
2020-04-27 15:21:56.4895||0HLVAKLTJO59T:00000003|Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker|INF|Executed action MyProject.Api.Web.Controllers.ColorsController.GetColors (MyProject.Api.Web) in 4003.8022ms
2020-04-27 15:21:56.4927||0HLVAKLTJO59T:00000003|Microsoft.AspNetCore.Routing.EndpointMiddleware|INF|Executed endpoint 'MyProject.Api.Web.Controllers.ColorsController.GetColors (MyProject.Api.Web)'
2020-04-27 15:21:56.4972||0HLVAKLTJO59T:00000003|Microsoft.AspNetCore.Hosting.Diagnostics|INF|Request finished in 4135.422ms 200 application/json; charset=utf-8
Solution 1:[1]
As @ejwill mentioned in his comment, you need to consider latency in the entire operation. Fetching from the database and mapping to DTOs is only part of what is happening during the round trip of the request and response to your API.
You can probably reduce the query time against your database table through some optimizations there. You don't indicate which database you're using, but a composite key based on two string/varchar values may not be the most performant choice, and indexes on the values you're filtering by may also help -- there are tradeoffs there depending on whether you're optimizing for writes or for reads. That said, 97 columns is not trivial either way. Do you need to query and return all 97 columns over the API? Is pagination an option?
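For illustration, one way to avoid reading and returning all 97 columns is to project straight into the DTO inside the EF Core query, optionally combined with paging. This is only a sketch: the ColorDTO fields (Name, Hex) and the page/pageSize parameters are assumptions for illustration, not taken from the question.

public async Task<IEnumerable<ColorDTO>> GetColors(string requestedColor, int page = 0, int pageSize = 500)
{
    // Projecting inside the query lets EF Core translate it to a SELECT of only these columns,
    // and AsNoTracking skips change tracking for read-only data.
    return await _dbContext.Colors
        .AsNoTracking()
        .Where(color => color.color == requestedColor)
        .OrderBy(color => color.Name)          // a stable order is required for paging
        .Skip(page * pageSize)
        .Take(pageSize)
        .Select(color => new ColorDTO
        {
            Name = color.Name,
            Hex = color.Hex                    // assumed field, for illustration only
        })
        .ToListAsync();
}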
If you must return all the data for all 97 columns at once and you're querying the API frequently, you can also consider the use of an in-memory cache, especially if the table is not changing often; instead of making the roundtrip to the database every time, you keep a copy of the data in memory so it can be returned much more quickly. You can look at an implementation of an in-memory cache that supports a generational model to keep serving up data while new versions are fetched.
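A minimal sketch of such a cache using the framework's IMemoryCache (assuming services.AddMemoryCache() is registered in Startup; the 10-minute lifetime and the cache-key format are arbitrary choices for illustration):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CachedColorService
{
    private readonly IMemoryCache _cache;
    private readonly IColorService _inner;   // the existing service that queries the database

    public CachedColorService(IMemoryCache cache, IColorService inner)
    {
        _cache = cache;
        _inner = inner;
    }

    public Task<IEnumerable<ColorDTO>> GetColors(string requestedColor)
    {
        // One cache entry per color; the database is only hit when the entry is missing or expired.
        return _cache.GetOrCreateAsync($"colors:{requestedColor}", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10);
            return _inner.GetColors(requestedColor);
        });
    }
}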
Solution 2:[2]
Serialization of the result can take a huge amount of time.
The first thing is serialization itself: if you return 3,000 records, serializing them to JSON or XML takes significant time. Consider moving to more compact binary formats.
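If switching away from JSON is not practical, a related mitigation is ASP.NET Core's built-in response compression, which shrinks the payload on the wire (it does not reduce serialization CPU time). A minimal Startup sketch for 3.1:

public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression(options =>
    {
        options.EnableForHttps = true;   // compression over HTTPS must be opted into explicitly
    });
    services.AddControllers();
}

public void Configure(IApplicationBuilder app)
{
    app.UseResponseCompression();        // place early, before the middleware that writes responses
    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}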
The second thing is memory and the GC. If the amount of serialized data exceeds 85,000 bytes, the memory for it is allocated on the Large Object Heap (LOH) in a single chunk, which can take time. You might inspect your LOH and look for response data stored there. A possible workaround is to respond with chunks of data, using a form of paging with an offset and position (see the sketch below).
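A sketch of what such a paged endpoint could look like; the offset and limit parameter names, the route attribute, and the service overload that accepts them are assumptions for illustration:

// Hypothetical paged variant: GET api/color/{requestedColor}?offset=0&limit=500
[HttpGet("{requestedColor}")]
public async Task<ActionResult<IEnumerable<ColorDTO>>> GetColors(
    string requestedColor,
    [FromQuery] int offset = 0,
    [FromQuery] int limit = 500)
{
    // The service would apply OrderBy/Skip/Take so that only one page is read, mapped and serialized.
    var page = await _colorService.GetColors(requestedColor, offset, limit);
    return Ok(page);
}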
You can easily check whether serialization is causing the performance trouble: leave the database call as it is, but return only 100-200 rows to the client instead of the whole result, or return fewer object fields (for example, only 3). The response time should drop noticeably.
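For that quick experiment, a temporary change in the service along these lines (the row cap of 200 is arbitrary) keeps the database call intact but shrinks what reaches the serializer:

// Diagnostic only: same query, same mapping, but cap what gets serialized and sent.
var colorDtos = _mapper.Map<IEnumerable<ColorDTO>>(colors)
    .Take(200)
    .ToList();
// If the total request time drops sharply, serialization and transfer of the full
// result set are the bottleneck rather than the EF Core query itself.
return colorDtos;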
Solution 3:[3]
Your problem relates to the SQL side. You should check the indexing of your columns and run your query with the execution plan to find the bottleneck. Also, to increase performance, I suggest rewriting the code to be asynchronous.
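As an illustration of the indexing point, an index on the filtered column can be declared through EF Core's fluent API (a sketch assuming the entity and property names from the question's query; the index name is arbitrary and a migration would still be needed):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Index on the column used in the Where clause of the color query.
    modelBuilder.Entity<Color>()
        .HasIndex(c => c.color)
        .HasName("IX_Colors_Color");
}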
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | Adam
Solution 2 | cassandrad
Solution 3 | Alireza Balavand