Web Services
RPC and more for Internet-wide services
Paul Krzyzanowski
September 25, 2023
Goal: Provide ways of communicating and exchanging data between disparate software systems over the internet, ensuring interoperability and modularity.
The early web (1990s)
The early web was based on the concept of hypertext, allowing researchers to link documents together. HTML (Hypertext Markup Language) was introduced as a means to format and structure this content. Initially, websites were static. They were collections of HTML documents hosted on servers, and every user received the same content. There was no dynamic content generation or personalization. Web browsers, such as the early versions of Netscape Navigator and Internet Explorer, were developed to interpret HTML and present formatted content to users. They acted as clients requesting static pages from servers.
Evolving towards interactivity
As more users, particularly businesses and personal users rather than academic and research entities, began to populate and use the web, there was a growing need for interactivity as well as more expressive formatting (web pages were no longer simply hyperlinked documents). Websites that could respond to user input, provide fresh content without manual updates, or support online transactions became necessary.
With the web’s growth, the ability to store, retrieve, and display large amounts of data became increasingly important. Think of commerce websites, such as Amazon and eBay. This required the integration of databases with web servers. Thus, web pages started being generated based on data stored in databases, leading to truly dynamic and data-driven websites.
The Common Gateway Interface (CGI) was one of the early methods used to provide dynamic web content. CGI scripts, often written in Perl, enabled servers to capture requests and generate dynamic content, providing a rudimentary level of interactivity. Servers could now generate web pages on the fly based on user input or other parameters. To support this, web servers implemented a mechanism to act as a proxy, forwarding HTTP requests to a separate program and sending the responses from that program back to the client. This idea served as the basis for future application servers that would support web services.
On the client side, support for Java applets was introduced into the web browser. Java applets were applications written in Java that were embedded within web pages and enabled interactive content. They have since been made obsolete by modern web technologies such as HTML5, CSS3, and JavaScript.
Several components in the web browser evolved to create richer interactive experiences for users:
Cascading Style Sheets (CSS) were created to describe the presentation of various HTML elements on a page, such as the width, height, color, background color, alignment, spacing, etc. These are stored in a separate file and are often shared among multiple pages. Pages can now use HTML markup without specifying the formatting and other presentation details (e.g., the background color for a table or the spacing between paragraphs) since those are defined in one or more CSS files.
JavaScript began as a small scripting language built into a browser so that code embedded in a web page could interact with user input, draw animations, and modify the look of the page.
The early web contained all formatting directives embedded within the HTML tags that were intermixed with the content. The Document Object Model (DOM) emerged as a data representation for the structure of a web page. All aspects of the page are organized into objects. This allows scripting languages in the browser, such as JavaScript, to change the content and style of the document as well as interact with the user.
The XMLHttpRequest object, added to JavaScript (initially by Microsoft), was a game-changer. It allowed web browsers to fetch and send data to a server asynchronously, in the background, without having to reload the entire page. This capability paved the way for AJAX, which stands for Asynchronous JavaScript and XML. It ushered in a new era of highly interactive websites where page content can change dynamically based on updates from a server. An example is Google Maps, where you can scroll through a never-ending map without ever having to reload a page. To use it, a program creates a new XMLHttpRequest object, sets up the request by specifying the HTTP method (GET, POST, PUT, …) and the URL, and defines a callback function to handle responses when they arrive.
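A minimal sketch of the pattern in TypeScript/JavaScript (the endpoint URL and element ID below are hypothetical):

```typescript
// Create the request object and set it up with an HTTP method and URL.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/headlines"); // hypothetical endpoint

// Define a callback to handle the response when it arrives.
xhr.onreadystatechange = () => {
  // readyState 4 = DONE; status 200 = HTTP OK
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Update part of the page without reloading it.
    document.getElementById("headlines")!.textContent = xhr.responseText;
  }
};

// send() returns immediately; the callback fires asynchronously later.
xhr.send();
```

Modern code usually wraps the same idea in the promise-based fetch API, but the asynchronous request/callback structure is unchanged.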
WebAssembly (wasm) is a low-level, binary instruction format that serves as a virtual machine for executing code nearly as fast as running native machine code. It acts as a compilation target for languages like Java, C, C++, Rust, and more, allowing these languages to run within web browsers for performance-critical web applications where JavaScript is not sufficient, such as graphics editors, games, or simulators.
The early web was user-facing. It was all about presenting formatted content to the user.
Limitations of web access
While web pages started becoming more dynamic, the content was largely unstructured and monolithic. Data and presentation were tightly coupled. Extracting data from different websites in a standardized or consistent manner was not possible.
As businesses realized the potential of the internet beyond just presenting content to users, there arose a need for a more structured way to expose functionalities and data over the web. This gave birth to web services.
Web services allowed for the direct transfer of data rather than presenting data intermixed with presentation directives. This meant that the same data could be consumed by various clients, such as web browsers, mobile apps, or other third-party applications, allowing for a more modular and interconnected web – one that isn’t designed solely for human interaction. They enable machine-to-machine (M2M) communication.
Why not RPC?
The obvious question is, why couldn’t web services simply use RPC since various RPC frameworks were already developed and deployed? There were several reasons:
Because most RPC frameworks offered the convenience of “you don’t need to pick a port”, RPC solutions typically ran services over an arbitrary range of ports where the operating system selected an unused port¹ and the service registered that port number with an RPC name server. This led to an administrative nightmare where an administrator could not set a firewall rule to allow access to a specific port that’s offering a service and would instead have to permit access to a wide range of ports. This, in turn, would enable access to every other RPC service that’s running or any other services that decided to bind to one of those ports.
Even though some RPC solutions were designed to support multiple languages and operating systems, most RPC systems were actually deployed with a limited set of environments in mind. Sun RPC did not generate code for IBM’s flavor of UNIX; DCE RPC and Sun RPC generated only C code. Microsoft’s services were difficult to use outside of the Microsoft ecosystem. Some cross-platform solutions, such as CORBA, were sold by multiple vendors, but even those were not always interoperable across different vendors' products.
It turns out that some services need more than RPC’s request-response style of interaction. For example, we might want to implement a publish-subscribe (pub/sub) interface, where a client requests to be informed when a certain event takes place. The server will, at future times, send messages informing the client of these events.
RPC systems were designed with local area networks in mind. This meant that clients expected a low latency to the server. The high latency to remote, loaded servers could lead to excessive retries (which generates even more server load) as well as clients giving up and returning a failure because a response was too slow to arrive.
Finally, state management was somewhat of an issue. Although RPC does not require that servers store client state, a distributed object model makes this the norm (local variables within the object as well as information about the existence of the object). Large-scale deployments could get bogged down by the memory use of objects created to service requests that have not yet been garbage collected.
Principles of web services
Web services are a set of protocols by which services can be published, discovered, and used over the Internet in a technology-neutral form. This means that they are designed to be language and architecture independent. Applications will typically invoke multiple remote services across different systems, sometimes offered by different organizations.
Web services were developed with several general principles in mind:
Use HTTP over TCP/IP for transport. This allows us to use existing web infrastructure: web servers, firewalls, and load balancers. HTTP enables authentication and secure transport via HTTPS (the HTTP protocol with transport layer security).
Be platform agnostic: Web services aimed to be platform and language neutral. Clients should not care how a service is implemented or what it is running on.
Standard: Services would operate over internet protocols and use a well-defined schema for messaging.
Use text-based payloads, called documents, marshalling all data into formats such as XML or JSON. This ensures that the marshalling format does not favor a specific processor architecture or programming language. It also avoids problems with content-inspecting firewalls, since they would see what looks like legitimate web content.
Tolerate high latency. Servers are likely not to be on a local area network and may be slow to respond either due to their own load or due to network latency. This means that, where possible, programmers should strive for asynchronous interactions: dispatch a request and see if you can do something else useful before you get the response.
Tolerate a large number of clients. Applications that interact with web services tend to be more loosely coupled than those that use distributed objects (remote procedure calls). Services may be run by different organizations, and servers cannot count on clients behaving properly or on there being only a small number of clients accessing a service. When possible, the ideal design is a stateless one, where each client request contains all the necessary data and the server does not have to store any state between requests. This simplifies recovery and load balancing among multiple servers. Documents, the unit of message exchange in web services, tend to be self-describing: they identify and itemize all the parameters (explicit typing) and also describe any additional state needed for the interaction.
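As an illustration, a self-describing, stateless request might look like the following TypeScript sketch, where the endpoint and field names are hypothetical:

```typescript
// A self-describing document: every parameter is named explicitly and
// the message carries all of its own context (hypothetical shape).
const order = {
  requestType: "placeOrder",
  customerId: "C-1001",
  items: [{ sku: "A-7", quantity: 2 }],
  currency: "USD",
};

// Stateless: the server keeps no session memory between requests,
// so any server replica behind a load balancer can process this one.
await fetch("https://shop.example.com/orders", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(order),
});
```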
Functionally, you can do anything with web services that you can with distributed objects (RPC). The differences are usually philosophical. Web services focus on document exchange and are often designed with high latency in mind. Document design is central to web services. Distributed objects tend to look at the world in a way where interfaces are the key parts of the design. The data structures (“documents”) passed in these interfaces just package the data for use by them.
Service-Oriented Architecture (SOA) and Microservices
Web services gave us the ability to have a collection of services hosted across different sites and available programmatically over the Internet. Thinking about the best ways of interacting with these services led to the concepts of Service-Oriented Architecture and Microservices. Both of these architectures build an application as the integration of network-accessible services where each service has a well-defined interface.
Service-Oriented Architecture, or SOA, is a software architectural pattern focused on the distribution and operation of loosely-coupled services. SOA involves constructing software components or services that provide specific functionalities. These can be reused across various applications, and they communicate to fulfill broader business processes.
Key Characteristics of SOA
Unassociated: No service depends on another service; they are all mutually independent. This ensures that services are designed to operate independently, communicating through well-defined interfaces.
Loose Coupling: Neither service needs to know about the internal structure of other services.
Interoperability: Allows services built on varying technologies to communicate using standard protocols, predominantly SOAP.
Discoverability: Facilitates the finding and understanding of services using directories or registries.
Reusability: Encourages building services for multiple contexts, promoting efficiency.
To maintain loose coupling among services, as well as between clients and servers, web services are generally designed to be stateless. This means they forgo object management and distributed garbage collection, and the client is responsible for transmitting any associated state with each request.
Microservices
A microservices architecture represents a modern approach to constructing software systems, emphasizing modularity and scalability. It builds on top of the principles of SOA, but microservices decompose a software application into smaller, independently operable services, each tailored for a specific function.
Key Characteristics of Microservices
Fine-Grained: Each service focuses on doing one specific thing efficiently.
Independence: Enables each service to be developed, tested, deployed, and scaled autonomously.
Decentralized Data Management: Each service usually manages its data, promoting loose coupling.
Statelessness: Prefers not to maintain client-specific session information between requests.
Microservices often opt for lightweight protocols, predominantly REST over HTTP/HTTPS, and JSON as a favored data format for messaging. A microservices approach promotes using the best-suited programming languages, frameworks, or technologies for each microservice.
Comparing SOA and Microservices
Granularity: SOA typically delivers coarser-grained services, whereas microservices are more fine-grained.
Communication: While SOA, being older, would often lean toward the use of SOAP, microservices tend to favor RESTful APIs and lightweight messaging.
Data Management: SOA services might share a common database, contrasting with the database-per-service model in microservices. They may share a common service for service discovery as well.
Deployment: Microservices put a strong emphasis on independent deployment of each service, while SOA may not offer this granularity.
Both SOA and Microservices are foundational architectures in the domain of distributed software systems. While both advocate for modularity and reusability, microservices further push the boundaries, emphasizing granularity and a decentralized approach. The choice between the two often hinges on project goals, existing infrastructure, and specific needs.
First Wave of Web Services (Late 1990s - Early 2000s)
XML-RPC
XML-RPC stands for “XML Remote Procedure Call.” It was one of the first protocols to facilitate communication between software running on disparate devices using HTTP as the transport and XML for encoding the calls. XML-RPC was created in 1998 as a simple protocol that marshals all requests and responses into XML messages. It is essentially just a marshalling protocol and the standard does not define an IDL or stub function generator.
There are many libraries that support XML-RPC, and some languages support it more transparently than others. XML-RPC is just a messaging format with simple data types. Nothing in the spec has support for remote objects, object references, or garbage collection.
The protocol was designed when the dominant vision of web services was that of providing RPC-style interactions (request-response). This turned out not to always be the most useful interaction model. For example, one might want to implement the subscribe-publish model we mentioned earlier, where a client would subscribe to receive published notifications of specific events from a server.
The two advantages of XML-RPC were that it was simple and language agnostic. It allowed for straightforward calls and responses, using XML to format these messages. As XML-RPC merely used HTTP and XML, it could be implemented in virtually any programming language.
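As a sketch, here is what such a call could look like on the wire, assuming a hypothetical calculator service (the <methodCall> document structure is the one defined by the XML-RPC spec; the endpoint is made up):

```typescript
// The entire call is marshalled as XML: the method name plus a list
// of explicitly typed parameters.
const body = `<?xml version="1.0"?>
<methodCall>
  <methodName>add</methodName>
  <params>
    <param><value><int>123</int></value></param>
    <param><value><int>456</int></value></param>
  </params>
</methodCall>`;

// XML-RPC rides on a plain HTTP POST, so any HTTP client can issue it.
const response = await fetch("https://calculator.example.com/RPC2", {
  method: "POST",
  headers: { "Content-Type": "text/xml" },
  body,
});
const reply = await response.text(); // a <methodResponse> containing 579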
XML-RPC was quite rudimentary and lacked many of the features required by larger, enterprise-level applications. As a result, there was a need for a more comprehensive protocol.
SOAP
During this early period, SOAP (Simple Object Access Protocol) emerged as an XML-based protocol that allowed programs on disparate systems to communicate over HTTP. For a while, SOAP was essentially synonymous with web services. SOAP elevated the concept of web services, defining a robust protocol with a richly-detailed set of standards for client-server messaging.
XML-RPC took an evolutionary fork and, with the support of companies such as Microsoft and IBM, evolved into SOAP, the Simple Object Access Protocol. The acronym has since been deprecated since SOAP turned out to be neither simple nor confined to accessing objects. XML-RPC became a simple subset of SOAP. In addition to remote procedure calls, SOAP added support for general-purpose messaging (sending, receiving, and asynchronous notification of messages). It also added advanced error handling, metadata headers, and extensible capabilities.
Beyond the broad goals of web services outlined earlier, SOAP championed:
The use of XML messaging. SOAP invocations are always XML messages that are usually sent via the HTTP protocol. However, HTTP transport is not a requirement; it is possible to send a SOAP message via email and SMTP (Simple Mail Transfer Protocol).
Discoverable services: With the use of WSDL (Web Services Description Language), clients could discover the methods provided by a web service. A WSDL document is an XML-based description of a specific web service. It serves as an interface definition and defines all the names, operations, parameters, destinations, and formats of requests.
A simple example of a SOAP message for a remote procedure call that adds two numbers might look like this:
<?xml version="1.0"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Body>
<m:Add xmlns:m="http://example.org/ArithmeticService">
<m:Number1>123</m:Number1>
<m:Number2>456</m:Number2>
</m:Add>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
In this SOAP message, we define the necessary XML namespaces at the start. Inside the <SOAP-ENV:Body>, there is a method called Add that is associated with the namespace http://example.org/ArithmeticService. The two numbers to be added, 123 and 456, are enclosed within the <m:Number1> and <m:Number2> tags, respectively.
WSDL is somewhat complex for human consumption. Typically, one creates an interface definition in a language such as Java and then uses a tool to translate that definition into a WSDL document. That WSDL document can then be fed to another tool (often by another programmer) to generate code that can be used to invoke remote functions or send remote messages.
As SOAP-based web services became popular, frameworks to support SOAP were created for various platforms. One of the more popular ones for Java, supported by Oracle (the owner of Java), is JAX-WS (Java API for XML Web Services). It is a Java standard for creating and consuming web services, specifically SOAP. JAX-WS is part of the Java EE (Enterprise Edition) platform, but it can also be used standalone in Java SE (Standard Edition).
The goal of JAX-WS is to invoke web services in the style of Java RMI, using either message-oriented or RPC-oriented interactions. Unlike traditional Java RMI, interoperability is important since the remote side (client or server) may not necessarily use Java. JAX-WS uses SOAP and WSDL to achieve platform independence. There are also third-party frameworks and libraries, like Apache CXF or Apache Axis2, which offer additional features or capabilities.
While SOAP was widely used, its extensive feature set coupled with a complex XML messaging structure contributed to its verbosity, leading to inflated message sizes. The elaborate nature of SOAP made it daunting for simpler applications and often led to steep learning curves for developers.
From XML to JSON and beyond
RPC frameworks traditionally defined their own formats for marshalling data: ONC RPC used XDR; DCE RPC and Microsoft RPC used NDR; Java RMI leveraged Java’s object serialization methods. These were efficient formats but closely aligned with their respective RPC implementations.
When web services were first developed, interoperability was important: services should not rely on any specific language or architecture. The obvious marshalling format to use was XML. It was, roughly, the format that HTML used for describing the content of web pages (although HTML was not particularly strict about proper XML structure). XML was adopted for XML-RPC and SOAP, and it remains heavily in use.
However, XML turned out to be a rather text-heavy protocol that was complex to parse. A lightweight alternative that gained much popularity is JSON (JavaScript Object Notation). It came from JavaScript’s object literal notation and has the same basic data types as JavaScript but is not at all dependent on JavaScript. Despite the JavaScript in the name, it was designed to be language-independent and easy to parse. It was introduced as the “fat-free alternative to XML.”
JSON has become the most widely used data representation format for web services today, particularly those built on REST (see the next section). Even more efficient is Google Protocol Buffers. This is a binary marshalling protocol and hence not always suited for web services over HTTP, but it is phenomenally efficient for messaging and for storing serialized data (e.g., saving objects in a file system).
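For comparison with the SOAP envelope shown earlier, here is a sketch of the same "add two numbers" request marshalled as JSON (the message shape is made up for illustration):

```typescript
// The same request as a JSON document: far more compact than XML.
const request = { method: "add", params: [123, 456] };

// Marshalling and unmarshalling are one native call each.
const wire = JSON.stringify(request); // '{"method":"add","params":[123,456]}'
const decoded = JSON.parse(wire);     // back to an ordinary object

console.log(decoded.params[0] + decoded.params[1]); // 579
```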
RESTful Web Services (Mid 2000s - Present)
As XML-based web services grew in complexity, there was a shift to simpler, resource-oriented web services, commonly referred to as RESTful services. Introduced around 2000 but gaining significant traction by the late 2000s, REST (Representational State Transfer) marked a paradigm shift. Rather than being a strict protocol, REST presented an architectural style, emphasizing scalability, simplicity and a strong coupling with HTTP.
With the rise of REST, there was a shift towards a more scalable and simplified way of building web services using standard HTTP methods. REST promoted concepts like statelessness and resource-oriented design. During this time, the web also saw a preference shift towards JSON for data interchange due to its lightweight nature and seamless integration with JavaScript. In parallel, OData (the Open Data Protocol) emerged, establishing a set of best practices for creating and consuming RESTful APIs.
REST is a departure from the approach of SOAP. Instead of using HTTP simply as a conduit for sending and receiving XML messages, where everything you need is contained in the body, REST incorporates itself into the HTTP protocol. In particular, the URI (Uniform Resource Identifier, usually a URL) incorporates the request and its list of parameters. The HTTP protocol itself defines the core operations (the verbs):
- POST: create something
- PUT: create or replace something
- PATCH: update a part of something
- GET: read something
- DELETE: delete something
The body of the message will contain the document, which holds the data for the operation and not, as in the case of SOAP, a structure that also identifies the operations to be performed on the document. In REST, documents often use JSON instead of XML, but they can really take on any structure (JSON, XML, comma-separated text, …).
Using the simple addition example from the SOAP message, one way of representing this is to think of “add” as a virtual resource that performs addition on the parameters given to it. In this case, the request will have no body. We will simply issue an HTTP GET request to get the “contents” of this resource via:
GET https://calculator.example.com/add?n1=123&n2=456
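Assuming this hypothetical service replies with a JSON document such as {"sum": 579}, a TypeScript client sketch using fetch would be:

```typescript
// GET the virtual "add" resource; the parameters ride in the URL itself.
const res = await fetch("https://calculator.example.com/add?n1=123&n2=456");

// Assume the service returns a JSON body such as {"sum": 579}.
const result = await res.json();
console.log(result.sum); // 579
```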
Some advantages of REST include:
- Simplicity: REST is based on standard HTTP methods (like GET, POST, PUT, DELETE) and status codes.
- Performance: JSON is typically less verbose than XML, leading to faster parsing and smaller message sizes.
- Statelessness: Each request from any client contains all the information needed to service the request.
- Cacheability: Responses can be labeled as cacheable or non-cacheable, optimizing performance by reducing the need for some client-server interactions.
Note, however, that SOAP defines a complete messaging protocol, detailing the definition and representation of data types and the structure of messages. It allows anyone with a WSDL file to be able to create a valid interface to a service. REST does not do this: it is a philosophy and approach to communications but does not include a way for a service to describe its interfaces and the messages that each of them requires.
Back to RPCs (Mid 2010s - Present)
Older RPC Mechanisms such as ONC RPC and DCE RPC continue to be used today. ONC RPC is primarily associated with the Network File System (NFS), which relies heavily on RPC mechanisms to function. NFS is a widely-used protocol for file sharing in UNIX-like systems (Linux and various flavors of BSD), which means that any system that utilizes NFS also relies on ONC RPC for its operations. DCE RPC, compatible with Microsoft’s Object RPC, is the foundation for numerous Microsoft services, especially in the context of Windows networks. Technologies like Active Directory, file sharing over SMB, and some legacy applications are built on top of DCE RPC. Therefore, its continued support is crucial for the proper functioning and backward compatibility of Microsoft-based infrastructures.
Web services, primarily those based on SOAP (Simple Object Access Protocol) and REST (Representational State Transfer), are widely used because of the widespread adoption of web platforms and the ability to plug services into these frameworks. They are platform-independent, which makes them suitable for public-facing applications where heterogeneous systems might interact. However, this versatility comes at a cost. The overhead associated with HTTP, especially when combined with verbose formats like XML (commonly used in SOAP), can make these services considerably less efficient than binary protocols. They might not be the best choice for high-performance, internal, or time-sensitive applications where bandwidth and processing overhead are concerns.
Several RPC frameworks have been developed to provide platform and language interoperability and high efficiency, along with features such as asynchronous calls and callbacks. Two popular frameworks are gRPC, created by Google, and Apache Thrift.
gRPC (Late 2010s - Present)
Developed by Google, gRPC is an open-source RPC framework that uses HTTP/2 for transport and Protocol Buffers (protobuf) as its interface description language. It is designed to be low-latency and can support multiple programming languages. Thanks to its use of HTTP/2, it can handle bidirectional streaming and multiplexing, enabling features like asynchronous calls, callbacks, and flow control. Key features of gRPC include:
- High performance: gRPC uses Protocol Buffers (protobuf) as its Interface Definition Language and its wire protocol, which is more efficient than XML or JSON.
- Streaming: gRPC supports streaming requests and responses, allowing for more complex use cases. With streaming, a single client request may return a stream of messages in response from the server. Conversely, a client may send a stream of messages to the server and receive a single response.
- Deadlines/Timeouts: gRPC calls can be set with deadlines or timeouts, making it more suitable for certain real-time systems.
- Multiplexing: A single TCP connection can handle multiple gRPC calls concurrently.
- Language Agnostic: gRPC supports multiple programming languages, ensuring wide adoptability.
About HTTP/2 (2015 - Present)
gRPC uses HTTP/2 for transport. HTTP provided support for things like name resolution, load balancing, and firewall traversal. HTTP/2 does the same, providing a set of core transport services to support efficient long-lived connections. HTTP/2 is the second major version of the HTTP network protocol used by the World Wide Web. It was standardized in 2015 as RFC 7540 and brings several key improvements over the earlier version, HTTP/1.1:
- Binary Protocol: Unlike HTTP/1.1, which is a text-based protocol, HTTP/2 uses binary communication. This makes it more efficient to parse and allows better optimization of network resources.
- Multiplexing: Multiple requests can be sent in parallel over a single TCP connection, removing the need for multiple connections between clients and servers. This contrasts with HTTP/1.1, where each resource fetch required a new connection (or a new request/response pair on a kept-alive connection), leading to head-of-line blocking, where fetching one object delays others.
- Header Compression: HTTP/2 introduces HPACK compression, which reduces the overhead of sending repetitive header fields, which are common in HTTP traffic. This results in significant performance improvements on web pages with many small requests.
- Stream Prioritization: Clients can specify the priority of a resource, allowing more important resources to be fetched and used more quickly, thus improving the perceived performance of web pages.
- Server Push: Servers can send resources to the client’s cache proactively before the client explicitly asks for them. This can speed up the loading of web pages by sending data the server expects the client will need in the near future.
- Enhanced Flow Control: Improved mechanisms for managing how data is sent on a connection, ensuring that high-priority streams don’t get blocked by low-priority streams.
- Reduced Latency: With the combination of multiplexing, header compression, binary encoding, and other features, HTTP/2 can reduce the amount of time it takes for a web page to start rendering.
Apache Thrift
Originally developed at Facebook in 2007 by a former Google employee, Apache Thrift is another open-source, cross-language RPC framework. It supports a variety of transport protocols and serialization formats. This flexibility makes it versatile, catering to the specific needs of a project, whether it’s lightweight communication or full-fledged service development. Like gRPC, it offers support for a multitude of programming languages. A Thrift compiler takes an interface definition, written in Thrift’s interface definition language, and generates language-specific stubs that perform marshalling, unmarshalling, and communication.
Both gRPC and Thrift accomplish similar goals and have many similarities. Some differences are:
- Thrift provides a choice of formats for data serialization, including a compact binary protocol, JSON, or XML. gRPC uses Protocol Buffers for data serialization.
- Thrift provides an abstraction for the transport layer, allowing the actual transport to be decoupled from the implementation of Thrift. Supported protocols include TCP, HTTP, and file reads/writes. gRPC uses HTTP/2 for transport.
- gRPC supports bidirectional data streaming. A big advantage of gRPC over Thrift is its ability to support streaming large amounts of data.
- Thrift has been around since 2007. gRPC was created in 2016.
Some high-profile companies that use Apache Thrift include Facebook, Twitter, Evernote, and Microsoft. Companies that use gRPC include Google, Netflix, and Uber.
GraphQL (2015 - Present)
Introduced by Facebook in 2015, GraphQL is an innovative query language for APIs. Unlike traditional methods where the server determines the shape of the response, GraphQL empowers the client to request precisely the data they need, thus eliminating data over-fetching and under-fetching.
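As a sketch (the endpoint and schema below are hypothetical), a client POSTs a query naming exactly the fields it wants and receives just those fields back:

```typescript
// The query asks for a user's name and email, and nothing else.
const query = `{ user(id: "42") { name email } }`;

// GraphQL requests are commonly carried as JSON over HTTP POST.
const res = await fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const { data } = await res.json(); // { user: { name: "...", email: "..." } }
console.log(data.user.name);
```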
References
Jesse James Garrett, Ajax: A New Approach to Web Applications, February 18, 2005. This is the paper that introduced AJAX.
XML SOAP, W3Schools.
SOAP Version 1.2 Part 1: Messaging Framework, W3C Recommendation 27 April 2007.
The History of REST APIs, readme blog, 15 Nov 2016.
REST vs. SOAP, RedHat, Published April 8, 2019.
Apache Thrift, Project page.
gRPC: A high performance, open source universal RPC framework, Project page.
Baeldung, Introduction to gRPC, July 28, 2023.
Pamoda Wimalasiri, HandsOn Introduction to gRPC with Java, February 11, 2022.
This is programmed within the server stub by creating a socket that’s bound to port 0. The getsockname system call allows the program to query the assigned port number and then register it with an RPC name service that clients would contact to look up a specific service. ↩︎