Remote procedure calls are fundamentally different from local function calls
REST has emerged as a leader among technologies for making API requests over a network, but it was not the first. One of the earliest approaches to integrating applications, dating back to the 1970s, is what is generally referred to as a Remote Procedure Call (RPC).
At its core, the RPC model tries to make a request to a remote network service look the same as calling a function or method within the same process in your programming language. This abstraction, called Location Transparency, lets developers write applications without worrying about where the services are actually hosted.
While being able to code against a homogeneous model can speed up development and result in higher productivity, there are a few fundamental flaws in assuming that network requests behave the same as local function calls.
Network requests are unpredictable.
While we take internet connectivity for granted, a network is inherently unreliable. A request or response may be lost, the remote machine may be slow or unavailable, or the data may arrive corrupted.
Network problems are the norm, so recovery mechanisms need to be in place. TCP and HTTP do the heavy lifting of reliable data transfer, but applications still need to detect lost or failed requests and recover from them, typically by retrying, as the sketch below shows.
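As a minimal sketch, the Go retry loop below treats connection errors and 5xx responses as transient and backs off exponentially between attempts. The URL is a hypothetical placeholder for the remote service.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry retries a request up to maxAttempts times with exponential
// backoff, treating network errors and 5xx responses as transient failures.
func fetchWithRetry(url string, maxAttempts int) (*http.Response, error) {
	backoff := 100 * time.Millisecond
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or a client error not worth retrying
		}
		if err == nil {
			resp.Body.Close() // discard the failed 5xx response
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		} else {
			lastErr = err // connection refused, reset, DNS failure, etc.
		}
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between attempts
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", maxAttempts, lastErr)
}

func main() {
	// example.com/orders stands in for a real remote endpoint.
	resp, err := fetchWithRetry("https://example.com/orders", 4)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Note that blindly retrying is only safe for requests with no side effects; the next point explains why.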
Network requests may time out.
A local function will either return a result, throw an exception, or never return if the process crashes or goes into an infinite loop. A network call has all of these outcomes plus one more: it may time out and produce no response at all. In the absence of a response, the client has no way of knowing whether the request ever reached the remote service.
If the client retries a request after a timeout, it has to assume that the original request might have gone through successfully. The remote service therefore has to be idempotent, or deduplicate requests, so that the action is not performed multiple times; the sketch after this paragraph shows one way to do that.
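A minimal sketch of server-side deduplication, assuming the client attaches an idempotency key of its own choosing to each logical request. The `PaymentService` type and key names here are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// PaymentService keeps an in-memory deduplication table keyed by a
// client-supplied idempotency key. A real service would persist this
// table and expire old keys.
type PaymentService struct {
	mu        sync.Mutex
	processed map[string]string // idempotency key -> stored result
}

// Charge performs the action at most once per idempotency key: if a retry
// arrives with a key we have already seen, we replay the stored result
// instead of charging the customer again.
func (s *PaymentService) Charge(idempotencyKey string, amountCents int) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	if result, ok := s.processed[idempotencyKey]; ok {
		return result // duplicate request: return the original outcome
	}
	result := fmt.Sprintf("charged %d cents", amountCents)
	s.processed[idempotencyKey] = result
	return result
}

func main() {
	svc := &PaymentService{processed: make(map[string]string)}
	// The client reuses the same key on the retry, so the charge runs once.
	fmt.Println(svc.Charge("order-42", 500)) // first attempt
	fmt.Println(svc.Charge("order-42", 500)) // retry after a timeout
}
```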
Network requests take a variable time to complete.
Understandably, a network request is slower than an in-memory function call. But its response time can also vary wildly: a call that took a few milliseconds may take many seconds the next time if the network is congested or the remote service is overloaded.
So applications have to absorb this variability, typically by putting an upper bound on how long they will wait, while still appearing to work seamlessly to the end user; the deadline sketch below shows the idea.
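A common technique is to attach a deadline to every remote call. This Go sketch uses a context timeout; example.com again stands in for the remote service.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Bound how long we are willing to wait, regardless of how slow the
	// network or the remote service happens to be right now.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// Deadline-exceeded and connection errors both land here; the
		// application decides whether to retry, degrade, or give up.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```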
Network requests need data to be encoded.
Passing data in memory, typically as references (pointers) to objects, is efficient. Data sent over a network, however, has to be converted to a sequence of bytes. Some form of encoding (serialization) is necessary to turn in-memory objects into a self-contained byte stream that the receiver can decode back into objects.
Encoding is easy with primitives like numbers or strings but gets harder with larger object graphs. Consequently, the application, or its RPC framework, has to take on the burden of translating object data to and from byte form.
Moreover, if the client and remote service are not written in the same language, the RPC framework has to translate the encoded data from one language's types to another's. This is a problem because programming languages vary wildly in how they handle data types, especially numbers, as the sketch below illustrates.
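A small Go sketch of the encoding step using JSON, with a hypothetical `Order` type. It also shows a cross-language pitfall: the same bytes decode cleanly in Go but would lose precision in a language that represents all numbers as 64-bit floats.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Order is an in-memory object; the network only ever sees the bytes
// we encode it into.
type Order struct {
	ID       int64   `json:"id"`
	Total    float64 `json:"total"`
	Customer string  `json:"customer"`
}

func main() {
	order := Order{ID: 9007199254740993, Total: 19.99, Customer: "Ada"}

	// Encode the object into a byte sequence for the wire.
	data, err := json.Marshal(order)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))

	// Go's int64 round-trips the ID exactly. A JavaScript client decoding
	// the same bytes would parse "id" as a 64-bit float and silently round
	// 9007199254740993 (above Number.MAX_SAFE_INTEGER) to 9007199254740992.
	var decoded Order
	if err := json.Unmarshal(data, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded.ID)
}
```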
RPC received a lot of attention early on, with multiple implementations such as Java's EJB (Enterprise JavaBeans) and RMI (Remote Method Invocation), Microsoft's DCOM (Distributed Component Object Model), and CORBA (Common Object Request Broker Architecture).
But the underlying problems with RPC mean there is little point in trying to make a remote service look like a local object in your program. It is much better to make remote requests explicit and handle their failure modes directly, and that is what newer RPC frameworks like Avro RPC and gRPC do, as the sketch below shows.
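As a sketch of what "explicit" looks like in practice, this Go client uses the stub generated from gRPC's canonical helloworld example and assumes a server running at localhost:50051. The connection, the deadline, and the error handling are all visible in the code rather than hidden behind a local-looking method call.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
	// The connection to the remote service is an explicit object the
	// caller creates and closes.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("could not connect: %v", err)
	}
	defer conn.Close()
	client := pb.NewGreeterClient(conn)

	// Every call takes a context, so deadlines and cancellation are part
	// of the method signature rather than an afterthought.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"})
	if err != nil {
		// Timeouts, unreachable servers, and encoding failures all
		// surface here as errors the caller must handle.
		log.Fatalf("call failed: %v", err)
	}
	log.Println(reply.GetMessage())
}
```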