gRPC API Performance Improvement Through Protobuf FieldMask In .NET
In gRPC, one service can directly call a method of another service on a different machine as if it were a local object, making it easier for you to create distributed applications and services.
So how is gRPC’s performance better than other API models? HTTP/2, on which gRPC relies, is one of the big reasons. In traditional HTTP (up to HTTP/1.1), it is not possible to send multiple requests or receive multiple responses concurrently over a single connection; a new connection must be created for each of them. HTTP/2 makes this kind of request/response multiplexing possible with the introduction of a new layer called binary framing.
The binary layer encapsulates and encodes the data. In this layer, the HTTP request/response gets broken down into frames. Using this mechanism, it’s possible to have data from multiple requests in a single connection.
You might also have come across cases where the HTTP headers are bigger than the payload. HTTP/2 solves this with a strategy called HPACK. Everything in HTTP/2, including the headers, is encoded before it is sent. But compressing headers is not the most important part: HTTP/2 maintains a header table on both the client and the server side. From that, HTTP/2 knows whether a header carries the same value as before and re-sends the value only when it differs from the previous one.
HTTP/2 doesn’t modify the application semantics of HTTP in any way. All the core concepts, such as HTTP methods, status codes, URIs, and header fields, remain in place. HTTP/2 just modifies the way data is framed and transported between the client and server.
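To see that only the transport changes, here is a small sketch of requesting HTTP/2 explicitly with a plain .NET `HttpClient` (the URL is a placeholder): the method, URI and headers are the same as in HTTP/1.1; only the framing underneath differs.

```csharp
using System.Net;
using System.Net.Http;

var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/")
{
    // Ask for HTTP/2; fall back if the server doesn't support it.
    Version = HttpVersion.Version20,
    VersionPolicy = HttpVersionPolicy.RequestVersionOrLower
};
var response = await client.SendAsync(request);
Console.WriteLine(response.Version); // 2.0 when the server negotiated HTTP/2
```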
Below are the main benefits of gRPC:
- Modern, high-performance, lightweight RPC framework.
- Contract-first API development, using Protocol Buffers by default, allowing for language-agnostic implementations.
- Tooling available for many languages to generate strongly-typed servers and clients.
- Supports client, server, and bi-directional streaming calls.
- Reduced network usage with Protobuf binary serialization.
These benefits make gRPC ideal for:
- Lightweight microservices where efficiency is critical.
- Polyglot systems where multiple languages are required for development.
- Point-to-point real-time services that need to handle streaming requests or responses.
To achieve better performance, gRPC and HTTP/2 alone are not enough. What if your payload itself is large? No matter which protocol you use, the time it takes to transmit data over the Internet grows with the number of bytes to be transferred. So, how do we overcome this?
Well, the client requesting the data can decide how much data it wants: all the fields of an object, or only a few. Let’s consider a scenario where we are running an online shopping platform, of course not as big as Amazon. Pun intended. Our backend is designed as microservices: basket, catalogue, ordering, discount and brand services.
If the user wants to see all discounts on one page, we can make a call from the browser to the Discount.API service, which returns a list of Coupon objects.
Now think of another scenario where the user adds 10 items to the basket. We also need to apply discounts to the individual items. If we give the frontend the responsibility to call the discount API and calculate the discounted item prices, it needs to make 10 API calls. Bad code smell.
We can delegate this task to the basket service: as soon as an item is added to the basket, the basket service communicates with the discount service (Discount.Grpc, to be precise), applies the discount to each item, and then returns the whole JSON response to the user. This reduces the calls between frontend and backend.
The only difference now is that the basket service, instead of the frontend, makes 10 RPC calls to the Discount.Grpc service. Ultimately, it is one service talking to another. But unlike the frontend, which requests all the fields, the basket service needs only the id and amount fields from the object and can omit name, description and sponsors, because all it does is subtract the discount amount from the item price.
Fields like name and description are not a big deal, but think of sponsors. To get the list of sponsors, the discount service needs to communicate with the brand service. Inter-service communication is fast, but it comes at a cost: it delays the response by a few milliseconds. If there were a way for the client to tell the server exactly which fields it needs, we could avoid the extra calls and heavy computation.
With GraphQL, this comes out of the box through the use of field selectors. In the JSON:API standard, a similar technique is known as sparse fieldsets. But neither is supported in gRPC APIs. How do we achieve a similar feature in gRPC?
Protocol Buffers, widely known as protobufs, are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data: like XML, but smaller, faster and simpler. gRPC, by default, uses protobuf as its IDL (interface definition language) and data serialization protocol.
Below is our discount.proto file, defining the structure of the procedures, requests and responses. From this single file, client and server code is generated at compilation. Language-specific generators are provided by Google; you just use the generated code and don’t need to worry about the behind-the-scenes transformations.
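The original file is not reproduced in this text; a sketch consistent with the fields discussed in this article (service, message and field names are assumptions) could look like:

```proto
syntax = "proto3";

option csharp_namespace = "Discount.Grpc.Protos";

service DiscountProtoService {
  rpc GetDiscount (GetDiscountRequest) returns (CouponModel);
}

message GetDiscountRequest {
  string productName = 1;
}

message CouponModel {
  int32 id = 1;
  string productName = 2;
  string description = 3;
  int32 amount = 4;
  repeated string sponsors = 5;
}
```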
Don’t worry if the message format looks a bit confusing. This definition file carries the context information; the numbers are just identifiers of the fields. Using this definition, we can send messages in an encoded format.
Let’s try to understand by seeing AddressModel.
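The AddressModel message is not reproduced in this text; a minimal sketch consistent with the encoding discussed below (a single string field, with an assumed field name) would be:

```proto
message AddressModel {
  string street = 1;
}
```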
If it were JSON, then the address object might have looked like below:
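A possible shape, assuming a single street field (the article only gives the value “Some street”):

```json
{
  "street": "Some street"
}
```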
But in protobuf, the same message would get converted into:
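In the article’s simplified notation (field number, wire type, value length, then the raw bytes of the value):

```
1211Some street
```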
In the case of “1211Some street”, 1 stands for the field identifier, 2 stands for the wire type (length-delimited, used for strings), and 11 is the length of the text. I admit this is a bit harder to read than JSON; however, it takes up very little space compared to JSON data. If you read the encoded message carefully, you won’t see the field names. Fields are identified on the server side by their numeric identifiers.
But nowhere have we mentioned which fields we want and which we don’t. This is possible using FieldMask, which allows us to send field names. Just update GetDiscountRequest by adding a field_mask field as shown below.
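A sketch of the updated request message (the productName field is an assumption carried over from the earlier definition):

```proto
import "google/protobuf/field_mask.proto";

message GetDiscountRequest {
  string productName = 1;
  google.protobuf.FieldMask field_mask = 2;
}
```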
google.protobuf.FieldMask is nothing but a protobuf message. It contains a single field named paths, which is used to specify the fields to be returned by a read operation or modified by an update operation.
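For reference, the well-known type shipped in google/protobuf/field_mask.proto is essentially:

```proto
message FieldMask {
  repeated string paths = 1;
}
```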
Now, let’s see how the basket service will consume this. Below is the controller that gets called when a user adds items to the basket. It iterates through all the items in the basket to calculate the final price.
As the basket service is interested only in the id and amount fields, it adds them to the Paths array. This array is received by the Discount.Grpc service in the request object.
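The original controller code is not reproduced in this text; a sketch of the per-item gRPC call it makes (the client, message and property names are assumptions based on the discussion above):

```csharp
using Google.Protobuf.WellKnownTypes;

// Called for each item in the basket: request only "id" and "amount"
// from the Discount.Grpc service and subtract the discount from the price.
public async Task<decimal> ApplyDiscountAsync(
    DiscountProtoService.DiscountProtoServiceClient client,
    string productName, decimal itemPrice)
{
    var request = new GetDiscountRequest
    {
        ProductName = productName,
        FieldMask = new FieldMask()
    };
    // Tell the server which fields we actually need.
    request.FieldMask.Paths.Add("id");
    request.FieldMask.Paths.Add("amount");

    CouponModel coupon = await client.GetDiscountAsync(request);
    return itemPrice - coupon.Amount;
}
```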
Now let’s look at the Discount.Grpc service. Read the steps written in the comments of the code snippet below.
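The server-side snippet is not reproduced in this text; a sketch of how the service might honour the mask, using the Merge helper on the C# FieldMask type (the repository and mapper fields are assumptions):

```csharp
public override async Task<CouponModel> GetDiscount(
    GetDiscountRequest request, ServerCallContext context)
{
    // 1. Load the full coupon from the data store.
    var coupon = await _repository.GetDiscount(request.ProductName);
    var fullModel = _mapper.Map<CouponModel>(coupon);

    // 2. If the client sent no mask, return every field as before.
    if (request.FieldMask == null || request.FieldMask.Paths.Count == 0)
        return fullModel;

    // 3. Copy only the requested fields (e.g. "id", "amount") into the response.
    var filtered = new CouponModel();
    request.FieldMask.Merge(fullModel, filtered);
    return filtered;
}
```

Because the response is built from an empty message, any field not listed in the mask is simply never populated, so it never crosses the wire.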
Run the application and check the response received by basket service. For that, we will send a request from Swagger.
In the response below, you can see the coupon model object. It has just two fields.
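The response body would look something like this (the values are illustrative assumptions):

```json
{
  "id": 1,
  "amount": 150
}
```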
You can argue: what’s the big deal in this? Imagine for a moment that you did not have the FieldMask property. What you could do instead is add extra fields to the request whose only purpose is to declare the intention of the consumer.
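Such a request might look like this (a sketch; the include* field names are hypothetical):

```proto
message GetDiscountRequest {
  string productName = 1;
  bool include_name = 2;
  bool include_description = 3;
  bool include_sponsors = 4;
}
```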
This approach requires an includeXXX field for each response field and doesn’t work well for nested fields. It also increases the maintenance burden and the complexity of the request. On top of that, you need to update your Discount.Grpc service to check each of these fields, which is of course not an ideal solution.
This is exactly why FieldMask was introduced.
In this article, we saw how to use FieldMask for read operations, but you can also use it for update/create operations: just send the fields that need to be updated and skip the rest.
You can find the complete code over here.