Who are the best API designers

Pragmatic, practically good: RESTful APIs

At first glance, the design of a RESTful API seems very simple. "One noun, four verbs" - programming can be that simple. Briefly: identify a resource, enable its manipulation (CRUD operations) via an endpoint and the HTTP methods (POST, GET, PUT, DELETE) - and the REST API is ready. As an example, let's imagine an interface via which orders can be placed, queried, changed and deleted (Listing 1).

// Create a new order via HTTP POST and JSON payload
// representing the order to create.
POST /api/orders
[... payload representing an order ...]

// Update an existing order with id 123 via HTTP PUT
// and JSON payload representing the changed order.
PUT /api/orders/123
[... payload representing the changed order ...]

// Delete an existing order with id 123 via HTTP DELETE.
DELETE /api/orders/123

// Retrieve an existing order with id 123 via HTTP GET.
GET /api/orders/123

// Retrieve all existing orders via HTTP GET.
GET /api/orders

Is that really all? Does the 1:1 mapping of CRUD operations to HTTP methods really always fit? And what do parameterized resource queries (alias filters) look like that deliberately restrict the number of hits? How do I customize the return format, e.g. for use on mobile devices? How is security handled? What happens in case of an error? And how do I allow the API to evolve? Question after question - but don't be afraid. We will answer them step by step.

Every beginning is difficult

The API is the developer's UI. As with good UX/UI design, care should be taken when designing an API that the expectations of the user - in this case the developer - are met. But what exactly do these expectations look like? The good news: in the REST environment there are a number of specifications that can and should serve as a guide. This applies, for example, to the correct use of the HTTP methods and the HTTP status codes. And if you can't find a specification for a problem, it doesn't hurt to study well-known and heavily frequented APIs (Facebook, Amazon, Twitter, GitHub and Co.). As a rule, you will find patterns and best practices there that have become established in the REST community over the past few years.


As shown above, a 1:1 mapping of CRUD operations to HTTP methods makes sense, and in most cases it is correct. However, according to the HTTP 1.1 specification, there are a few subtleties to observe. It says, among other things, that POST creates a child resource at a URL defined by the server. In other words, the server creates a new resource during a POST and assigns it a unique ID, which is usually returned to the calling client via the Location header as part of the URL of the new resource (Listing 2).

// POST request to create new order
POST /api/orders
[... payload representing an order ...]

// Response with success code and location header
HTTP/1.1 201 Created
Location: /api/orders/{new-id}
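This server-side behavior can be sketched in a few lines. The following is a minimal illustration only - the in-memory store, the counter-based ID scheme and the function name `create_order` are all assumptions, not part of any real framework:

```python
import itertools

# Hypothetical in-memory store; IDs are assigned by the server.
_orders = {}
_next_id = itertools.count(1)

def create_order(payload):
    """Handle POST /api/orders: store the order under a server-chosen ID
    and return the status code plus the Location header pointing at the
    newly created resource."""
    new_id = next(_next_id)
    _orders[new_id] = payload
    return 201, {"Location": f"/api/orders/{new_id}"}
```

The point is simply that the client never picks the ID; the server does, and communicates it back via `Location`.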

But what if the ID - and thus the URL identifying the resource - is to be specified by the client? This is where the HTTP method PUT comes into play. According to the specification, an HTTP PUT replaces or creates a resource at a client-defined URL. The following call overwrites the existing order with the ID 123, if available, or alternatively creates a new one with the ID specified by the client (Listing 3).

// PUT request to replace existing order
PUT /api/orders/123
[... payload representing an order ...]

// Response with success code and location header
// 200 OK, if existing order 123 replaced
// 201 Created, if new order 123 created
HTTP/1.1 200 OK or 201 Created
Location: /api/orders/123

The HTTP method PUT can therefore perform several tasks. Whether client-side assignment of IDs should be allowed in your own API - and PUT implemented accordingly - is an API design decision. Besides PUT there is another, lesser-known HTTP method for changing resources: PATCH. While a PUT leads to a complete replacement of the resource, so that the payload must always contain the entire resource, a PATCH only performs an update according to the changes specified within the payload. For example, if you want to change the "comment" field of an order, a PUT would have to transfer the entire order, including the changed "comment" field, in the payload. With a PATCH, on the other hand, only the changed field or a corresponding description of the change suffices as payload. What sounds very attractive at first glance is not exactly trivial to implement in practice: depending on the payload format, evaluating the change requests and then executing them can be quite complex.
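To give a feeling for that complexity, here is a deliberately minimal sketch of applying RFC 6902-style patch operations to a nested dict. It only handles the "add", "replace" and "remove" operations on object members, ignores arrays and error handling, and is not a substitute for a full implementation (such as a JSON Patch library):

```python
def apply_json_patch(doc, patch):
    """Apply a minimal subset of RFC 6902 operations ('add', 'replace',
    'remove') to a nested dict. Arrays, tests, moves and error cases
    are deliberately omitted; real APIs should use a full library."""
    for op in patch:
        # Split the JSON Pointer path into its segments.
        parts = [p for p in op["path"].split("/") if p]
        target = doc
        for key in parts[:-1]:
            target = target[key]
        if op["op"] in ("add", "replace"):
            target[parts[-1]] = op["value"]
        elif op["op"] == "remove":
            del target[parts[-1]]
    return doc
```

Even this toy version already has to navigate JSON Pointer paths; a production-grade PATCH endpoint additionally needs validation, array semantics and atomicity.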

Safe and idempotent

The HTTP 1.1 specification has other special features that should be taken into account when designing a RESTful API. Among other things, it characterizes some of its methods as safe (GET, HEAD and OPTIONS) and/or idempotent (GET, HEAD, OPTIONS, PUT and DELETE). Safe methods must not change a resource or its representation. In other words: read access to a resource via HTTP GET should never trigger a writing side effect. The following RPC-style calls to manipulate an order are therefore taboo:

  • GET /api/orders?add=item...
  • GET /api/orders/1234/addItem...

But what if, for example, a GET returns the resource and at the same time updates a lastAccessed field in the database? As long as this manipulation only affects the server-side domain or database, it is permitted. If, on the other hand, the field is exposed to the outside via the interface - we will ignore the question of whether that makes sense here - then the method is no longer safe and therefore does not meet the expectations of the user. Admittedly, the above example looks a bit contrived. It becomes more practical when looking at the idempotent methods. The rule here is that multiple calls of the method may only lead to a one-time change of the resource representation: multiple calls to PUT may only change the resource or its representation once, and multiple calls to DELETE may only delete the resource once.

Only change once and only delete once? Isn't that a matter of course? Not necessarily. If a PUT were implemented, for example, in such a way that it adds a new product to an existing order, then multiple calls would also add several products. And that is exactly what is prohibited. According to the specification, the payload of a PUT completely replaces the previous representation of the resource. Once replaced, further calls do not lead to any further changes and always return the same HTTP status code as the first call (200 for OK or 204 for No Content).

So far, so good. But what does idempotent mean in connection with DELETE? Should multiple calls always return the same HTTP status code here, too? Not necessarily. Idempotent simply means that the state of the resource on the server may only change once in the event of multiple calls and that repeated calls must not trigger any further side effects. Idempotency, on the other hand, does not make any statement about the return value. A second deletion attempt can therefore return a different HTTP status code (e.g. 404 for Not Found) than the first, in order to signal to the client that the call no longer makes sense.
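These semantics can be made concrete with a small sketch. The in-memory dictionary and the function names below are hypothetical; the point is that repeating a PUT or DELETE changes the server state at most once, even though the returned status codes may differ between the first and subsequent calls:

```python
# Hypothetical in-memory resource store illustrating idempotency.
orders = {}

def put_order(order_id, representation):
    """PUT replaces the whole resource; repeating the call leaves the
    state unchanged. 201 on creation, 200 on replacement."""
    existed = order_id in orders
    orders[order_id] = representation
    return 200 if existed else 201

def delete_order(order_id):
    """DELETE removes the resource exactly once; a repeated call changes
    nothing and may signal 404 to the client."""
    if order_id in orders:
        del orders[order_id]
        return 204
    return 404
```

Note that `put_order` is idempotent (the stored state after one call equals the state after many), while the status code legitimately differs between 201 and 200 - exactly the distinction the specification draws between state changes and return values.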

Incidentally, the PATCH method mentioned above is neither safe nor idempotent. This is because its payload is freely definable: not only individual fields of the resource to be changed are allowed, but also change instructions, for example. A PATCH call with the JSON structure according to RFC 6902 shown below would overwrite the item with the ID 123 in the order and add a second item to it. Multiple calls would accordingly add that item to the order multiple times:

[{"op": "add", "path": "/ items", "value": [... some item data ...]}, {"op": "replace", "path": " / items / 123 "," value ": [... item 123 data ...]}]

Filters and Co.

By definition, querying individual resources or all resources of a type is very easy in REST. But what if you want to set a specific filter or limit for a query? After all, as a client, you don't necessarily want to load the entire database over the line with one query. And even if this is the intention of the client, the server should prevent this and artificially limit the number of hits.

Even if there is no specific specification for filtering queries, some best practices have established themselves in the REST community. In the simplest variant, the fields to be filtered and their values are specified as query parameters. For example, the queries shown in Listing 4 would deliver all open orders, all open orders more expensive than 100 euros, and the first ten open orders.

// Ask for all open orders
GET /api/orders?state=open

// Ask for all open orders more expensive
// than 100.00 Euro
GET /api/orders?state=open&amount>=100.00

// Ask for the first 10 open orders
GET /api/orders?state=open&offset=0&limit=10
// offset may be optional if 0

The whole thing can of course be combined as desired by concatenating the individual filters. The result set can also be sorted via a dedicated query parameter (e.g. sort) together with the desired sort order (e.g. +/- or asc/desc): GET /api/orders?sort=-status,+price.
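Parsing such a sort parameter on the server side is straightforward. The following sketch assumes the +/- convention from the example above (a bare field name defaulting to ascending is an additional assumption, not a standard):

```python
def parse_sort(param):
    """Parse a sort query parameter such as '-status,+price' into a list
    of (field, direction) pairs. A field without prefix is treated as
    ascending by convention."""
    result = []
    for token in param.split(","):
        token = token.strip()
        if token.startswith("-"):
            result.append((token[1:], "desc"))
        elif token.startswith("+"):
            result.append((token[1:], "asc"))
        else:
            result.append((token, "asc"))
    return result
```

The resulting pairs can then be fed directly into an ORDER BY clause or a comparable sort criterion of the persistence layer.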

If you do not want to receive all fields of the order in the result, this can also be controlled via an additional query parameter (e.g. fields for fields to include, or exclude for fields to ignore):

  • GET /api/orders?fields=id,status,price,date
  • GET /api/orders?exclude=id,date
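On the server, such a field restriction boils down to a simple projection of the representation. A minimal sketch, assuming the resource is available as a dict and the parameter values arrive as comma-separated strings:

```python
def project_fields(resource, fields=None, exclude=None):
    """Restrict a resource representation to the requested fields
    (?fields=...) or drop the excluded ones (?exclude=...).
    If neither parameter is given, the full representation is returned."""
    if fields is not None:
        wanted = set(fields.split(","))
        return {k: v for k, v in resource.items() if k in wanted}
    if exclude is not None:
        dropped = set(exclude.split(","))
        return {k: v for k, v in resource.items() if k not in dropped}
    return resource
```
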

What still looks simple in isolation can quickly become so complex in combination that it is hardly manageable. For example, let's imagine that we want to allow both an "and" and an "or" in the combined query. And let's imagine that we also want to combine groups of filters via "and" or "or" as desired, and at the same time restrict the returned fields for mobile devices. How, for example, would the URL look for "all orders a) from the last two days AND with a goods value over 100 euros OR the status open OR in processing, OR b) with a goods value under 100 euros AND today's order date AND the status open, sorted by date, status and goods value", prepared for the mobile version? With such complex queries you are quickly tempted to invent your own query language - including a parser - and thus reinvent the wheel. A first remedy, at least for standard queries, is to use aliases for predefined filter and field combinations. These are then to be understood as a kind of virtual resource. Alternatively, the desired restriction of the fields can also be signaled via an HTTP header (Prefer) (Listing 5).

// Use virtual resource open_orders as
// an alias for /api/orders?state=open
GET /api/open_orders

// Use style=mobile parameter as an alias
// for a predefined set of field filters
GET /api/orders?style=mobile

// Use Prefer header return=mobile-format
// as an alias for a predefined set of field filters
// in combination with open_orders alias
GET /api/open_orders
Prefer: return=mobile-format

A much more flexible alternative is RQL (Resource Query Language), which is based on FIQL (Feed Item Query Language). With the help of this object-style query language and the associated parsers, arbitrary queries on existing resources can be mapped directly to JS arrays, SQL, MongoDB or Elasticsearch. Thanks to the Java parser and the JPA Criteria Builder, a simple translation into a JPA query is also possible. RQL is therefore an excellent choice for complex queries, including filters, on individual resources.

But this variant also reaches its limits at some point - namely whenever a query should not target a single resource but, like a kind of join, span multiple resources. This problem usually results in the client issuing the queries in multiple round trips. For example, if we want to query all orders from the last week from all branches in a certain postcode area, we would first query the matching branches and then the orders for each branch. Merging the results would then take place on the client - the classic N+1 dilemma. At the latest now we have reached a point where we should look for a viable alternative beyond REST. GraphQL from Facebook is worth more than just a look! With the help of GraphQL, arbitrary queries can be posed against an object graph, and the desired result fields of the resources involved can be selected and combined in a targeted manner. Regardless of the number of resources involved, the result can be fetched with a single round trip.


If a client requests a potentially large number of resources, for example all orders, the request should be restricted from the outset. As a rule, pagination, i.e. splitting the result into pages, is used for this. In general, this can be done in two different ways (Listing 6). In the first variant, the client passes a specific page number within the request, e.g. as a query parameter. This implies that the server determines how many hits exist per page and calculates the offset accordingly. In the second variant, the client transfers both the offset and the maximum number of resources to be transferred. This requires a little more intelligence on the client, but in return brings significantly more flexibility.

// Pagination variant 1:
// Use concrete page numbers; offset and
// limit are calculated on the server side
GET /api/open_orders?page=3

// Pagination variant 2:
// Calculate offset and limit on client
// side and use values as query parameters
GET /api/orders?offset=30&limit=10
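The translation between the two variants is a one-liner worth spelling out, since off-by-one errors are common here. The sketch below assumes 1-based page numbers and a server-defined default page size of 10 (both assumptions, not standards):

```python
def page_to_range(page, page_size=10):
    """Variant 1 -> variant 2: translate a 1-based page number into the
    (offset, limit) pair the server would use internally."""
    return (page - 1) * page_size, page_size
```

With a page size of 10, page 3 maps to offset 20 and limit 10 - matching the "page 3" request shown in Listing 7 below.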

Pagination is always interesting when you need to navigate through larger amounts of data. For example, let's imagine a table within a single-page application (SPA) within which we can page back and forth. It should also be possible to jump to the first and last page of the table. In order to correctly calculate the links of the corresponding navigation buttons, the client needs some information. Wouldn't it be nice if the server could do this work for it? No problem: the server only has to send corresponding link references in the response. These can be delivered either as part of the payload or alternatively as Link headers (Listing 7).

// get "page 3" and info about PREV / NEXT GET / api / orders? offset = 20 & limit = 10 HTTP / 1.1 // Response with success code and link header for // navigation purpose HTTP / 1.1. 206 Partial Content Link: <... / api / orders? Offset = 0 & limit = 10>; rel = "first" <... / api // orders? offset = 10 & limit = 10>; rel = "prev", <... / api // orders? offset = 30 & limit = 10>; rel = "next", <... / api // orders? offset = 40 & limit = 3>; rel = "last"


As already shown in several examples, HTTP headers play a very special role in the REST environment. But what belongs in the body and what in the header? And when should a path or query parameter be used within the URL and when should a header be used?

In general, the header should be used for global metadata and the body for business or request-specific information. The same applies to the parameters. While path and query parameters should be used for resource-specific parameters, the HTTP header is used to exchange general metadata. The path parameters, for example, contain the specification of a sub-resource or a resource ID, and the query parameters a specific filter including its filter value. The header, on the other hand, carries information on the exchange format (Accept, Content-Type), security (Authorization header) or - as already seen - on possible further actions (Link header). Incidentally, using HTTP headers has the great advantage that the header values can be accessed in a targeted manner without the entire payload having to be parsed.

Of course, in addition to standardized headers, custom headers can also be used to exchange self-defined metadata of your own interface. In the REST community, an "X-" prefix is usually used for such custom headers to indicate that they are not standardized. However, the use of the "X-" prefix has been deprecated since 2012. The reason is that a later promotion of the custom header to a standard, and the associated removal of the "X-" prefix, would break backward compatibility. A good example is the gzip content coding, which clients and servers currently have to support both as x-gzip and as gzip. Every RESTful API designer can therefore ask himself how likely it is that a proprietary header of his own API will ever make it into an open standard, and pragmatically weigh up whether or not to use the "X-" prefix.

Status codes

At least as important and helpful as the correct use of HTTP methods and headers is the targeted use of the available HTTP status codes. It is no coincidence that in addition to the three most common codes 200 (OK), 400 (Bad Request) and 500 (Internal Server Error) there is a whole list of other useful codes.

First of all, you should make sure that the server returns a code from the correct number range. While the 1xx status codes (informational) indicate that the processing of the request is still ongoing, the 2xx codes (success) signal that the request has been processed correctly. The 3xx status codes (redirection) in turn show the client that further steps are necessary to process the request successfully. The 4xx status codes (client errors) indicate problems with the request, while the 5xx codes (server errors) indicate that the server is not able to process the request in a meaningful way. The choice of the correct number group shows the client, among other things, whether repeating the request - possibly with changed request data - makes sense or not.
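The grouping by number range can be expressed directly in code, which is occasionally useful in client libraries that only care about the class of a response, not the exact code:

```python
def status_class(code):
    """Map an HTTP status code to its class, following the
    1xx/2xx/3xx/4xx/5xx grouping described above."""
    names = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return names[code // 100]
```
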

The expectations of the API user should also be the focus when using the status codes. If, for example, he asks for the list of all open orders and receives an empty list and the status code 200 in response, he cannot be sure whether the payload is deliberately empty or whether there is an error on the server side. It looks different if the server delivers an empty list and the code 204 (No Content): in this case, the API user is deliberately given no room for interpretation - and thus no potential sources of error. The same applies if the API user only requests a limited subset via pagination. In this case, the server should deliver the code 206 (Partial Content) instead of 200, signaling that more hits are waiting on the server side and that their links can be found in the header or the payload. If the API user creates a new resource, the server should acknowledge this with 201 (Created) instead of just 200. Here, too, the specific code is much more meaningful than the general one. If the server cannot process the request directly, i.e. synchronously, but can only initiate its processing asynchronously, this should be signaled by the code 202 (Accepted). In this way, the client knows that the server has not yet necessarily made a change to the resource, but that this will definitely take place.

In addition to the codes from area 200 shown so far, the various codes from the other areas should of course also be used in a targeted manner. A 401 (Unauthorized) or 403 (Forbidden) certainly has a completely different meaning than the more generic error code 400. The same applies to 404 (Not Found), 405 (Method Not Allowed) and 429 (Too Many Requests).
A really good interactive overview of the individual HTTP status codes including their meaning and potential application scenarios can be found on the Restlet website.

Caching and Security

In addition to the topics discussed so far, there are many, many other aspects to consider when designing a "good" API. For example, a sophisticated caching concept helps to avoid outgoing client requests or, alternatively, to serve them from the cache of upstream content delivery networks, proxies or other servers. The means to this end is again built into HTTP. While in HTTP 1.0 the caching behavior could only be controlled relatively roughly via the Expires header, HTTP 1.1 with its Cache-Control header allows significantly more options. In addition, a conditional GET can be sent to the server via the If-Modified-Since header (plus Last-Modified) or the If-None-Match header (plus ETag). If no newer version of the resource is available on the server, it responds with the status code 304 (Not Modified) and an empty body. Otherwise the request is treated like a normal GET.

The topic of security must also be reconsidered in the REST environment. This is especially true when systems that were previously only used internally are opened to the outside via an API. Since a RESTful service should by definition be stateless, and there is therefore no session on the server side, the challenge arises of how the information necessary for authentication and authorization can be transferred within the requests without the RESTful service having to issue a new authentication request to an authentication service for each call. A possible and now quite established solution is for the client to authenticate itself once against an authentication server and receive from it a time-limited, signed token (Fig. 1).

Fig. 1: Token-based authentication

This token is then sent to the RESTful service within the Authorization header, where its validity can be verified. For this to work, the authentication server and the RESTful service must have exchanged a public key in advance. If a JSON Web Token (JWT) is used, additional data can be stored within the token as key-value pairs in the form of so-called claims (Fig. 2). This is of particular interest in the context of REST-based microservices, since general data, such as the roles of a user, can be passed easily and efficiently through all microservices involved in a request.
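The structure of such a token - three base64url-encoded parts separated by dots - can be illustrated with the stdlib alone. Important caveat: the sketch below only decodes the claims and deliberately skips signature verification; a real service must verify the signature first, typically with a dedicated JWT/JOSE library:

```python
import base64
import json

def jwt_claims(token):
    """Read the claims (payload) from a JWT *without* verifying its
    signature - fine for illustrating the token structure, but never
    sufficient for authorization decisions."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Decoding a token issued for a user with an "admin" role would, for example, yield a claims dict containing that role - exactly the kind of general data that can be handed from microservice to microservice.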

Fig. 2: JWT including claims

Academic vs. Pragmatic REST

As this article has shown so far, it's not that difficult to design a good RESTful API. It is important not to try to reinvent the wheel, but to fall back on standards, established patterns and best practices in order to meet the expectations of the user. Of course, it is also important that design decisions, once made, are applied consistently throughout the API. But why are there still heated discussions about the correct use of REST?

Probably very few developers of RESTful APIs have ever taken a deeper look at Roy Thomas Fielding's dissertation from 2000. In chapter 5, Fielding describes a "new" approach to network-based software architectures called REST (Representational State Transfer). The interesting thing is that the dissertation says very little about endpoints, HTTP methods or even JSON or XML. Rather, the work is about an architectural approach characterized by terms such as client-server, stateless, caching, uniform interface, layered system and code on demand. What most of us understand by REST - the identification of a resource by means of a unique URL and its manipulation through a representation of the resource (JSON or XML) in interaction with self-explanatory messages (aka HTTP methods) - plays a subordinate role in the work and merely appears there as part of "the four interface constraints". At "four", one or the other attentive reader will pause and ask: "But I've only seen three here so far":

  1. Identification via URL
  2. Manipulation via representation (e.g. JSON)
  3. Self-describing messages (HTTP methods)

And this is exactly where the crux of the matter lies. Fielding lists “Hypermedia as the engine of application state” (HATEOAS) as the fourth constraint. The following two quotes from Fielding show the importance he personally attaches to the terms hypermedia and hypertext in the context of REST:

  1. "If the engine of application state (and hence the API) is not driven by hypertext, then it cannot be RESTful and cannot be a REST API".
  2. "A REST API should be entered with no prior knowledge beyond the initial URI ... From that point on, all application state transitions must be driven by the client selection of server-provided choices ..."

Fielding says that a RESTful API should be able to be used without prior knowledge beyond the initial URL. Calling this URL provides a list of links (as part of the payload or alternatively as link headers) with possible and meaningful operations.


Admittedly, the scenario outlined above probably sounds a bit abstract at first. Yet it's exactly what we see on the internet every day. REST is nothing more than an abstraction of the behavior of the World Wide Web as we know it: we call up a URI and receive - in addition to one or the other piece of information - a list of links that show us which operations are possible or permitted at precisely that moment.

We already got to know this procedure with the pagination navigation outlined above. If you ask an API for a section of a larger amount of data, link references allow you to navigate within that amount of data. The user of the API only knows the semantics of the references in advance - prev means back, next means forward - but not the specific URLs. With RFC 5988 there has even been a standard since 2010 that defines link relation names and their semantics. So please don't reinvent the wheel; where possible and sensible, use one of the link names listed there.

When navigating through a large amount of data, it is still easy to imagine that the API can provide generic links for the next, previous, first or last page. But how is that supposed to work with the normal resources of a RESTful API? Let's take another look at our example of placing, changing, deleting and querying orders. The initial call to the only known URI, http://api.myshops.com/, would provide us with an HTTP response with the code 204 (No Content) and a link reference with the name edit and the URL http://api.myshops.com/orders. Based on the RFC mentioned above, we know that we can place an order via this URL (using HTTP POST).

It is crucial for understanding that we of course already knew beforehand that orders can be handled via the shop's API, and that we also knew the necessary JSON format. The specific URLs to be used, however, are not known to us and actually do not matter. If we now place an order with the help of the returned URL, we get back, in addition to the confirmation (204 No Content or 200 OK), link references that signal to us what can be done with the order that has just been placed.

One of these link references is typically self, pointing to the created resource itself - in our case, for example, http://api.myshops.com/orders/123. With the help of this link, the status of the order can be queried at any time. Another link reference labeled payment - also part of RFC 5988 - could signal how we can pay for the open order: http://api.myshops.com/payments/123. Depending on the action just carried out and the associated server-side changes to the application state, the links lead us step by step through the application or, in our case, through the possible use cases of the ordering process. For the exchange of the necessary reference information, the Hypertext Application Language (HAL) has become established in recent years. If the whole thing is still a little too abstract, the HATEOAS demo from Heroku, including the generic HAL browser, is recommended.
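A HAL-style representation of such an order can be sketched as a plain dict. The paths and the rule "offer a payment link only while the order is unpaid" are illustrative assumptions for this example, not part of the HAL specification itself:

```python
def hal_order(order_id, paid=False):
    """Sketch of a HAL-style representation: the current state of the
    order determines which link relations (RFC 5988) are offered."""
    links = {"self": {"href": f"/api/orders/{order_id}"}}
    if not paid:
        # Only an unpaid order offers the payment transition.
        links["payment"] = {"href": f"/api/payments/{order_id}"}
    return {"id": order_id, "paid": paid, "_links": links}
```

This is HATEOAS in miniature: the client does not hardcode the payment URL but follows the payment relation - and if the relation is absent, the transition is simply not available right now.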

If it's not REST ...

Hand on heart: who of us goes as far in all of our RESTful APIs as described in the above scenario? Probably very few. But how is your own API to be classified on the open-ended REST Richter scale? Can I still use the word REST in connection with my API without risking a shitstorm from the corner of the REST purists?

Leonard Richardson's maturity model can be used as a small litmus test for classifying your own web service. In his model, Richardson classifies web services into three different REST maturity levels depending on their support for URIs, HTTP and hypermedia (Fig. 3). Actually there are even four, as below the three legitimate levels there is a level 0, which merely sends XML or JSON via HTTP POST over the line and thus corresponds more to the classic RPC model. This level is therefore often referred to as RESTless, as it has almost nothing in common with REST.

Fig. 3: Richardson Maturity Model

In level 1, Richardson introduces the resources known from REST and thus different endpoints per resource. With this model, there are usually several URIs in an application. However, only one HTTP method (mostly POST) is used, and the extended options of the HTTP protocol, such as headers, return codes, caching, etc., are also dispensed with. Level 2 starts exactly where level 1 ends. Operations on the resources are assigned to the various HTTP methods (querying a resource via GET, creation via POST or PUT, change via PUT, partial change via PATCH and deletion via DELETE). The HTTP status codes are also used sensibly to signal what happened to the resources on the server - or not. Only in level 3 - aka "the glory of REST" - does Richardson introduce hypermedia and thus a self-explanatory system.


Okay, so let's agree that - according to the pure teaching - probably very few of us have so far climbed the holy REST Olympus. But is that really so bad? What matters is that in the end an expressive API is created that is understandable for its user, i.e. the developer, and that brings a corresponding stability with it. Vinay Sahni, founder of Enchant, wrote on his blog: "An API is a developer's UI - just like any UI, it's important to ensure the user's experience is thought out carefully!" You could hardly put it better.

But can I call my API RESTful at all if, according to Richardson, I have only reached a level below 3? Personally, I'm more pragmatic than religious: if my API fulfills the conditions of level 2, I would definitely call it RESTful - with level 1, however, not.

But what if I meet an avowed level 3 advocate who accepts no truth besides his own? Or to put it another way: in such a situation, should I really insist that I have designed a RESTful API? Maybe we are just missing a term for the formula "RESTful minus HATEOAS". How about RESTalike?

Developer magazine

This article was published in the developer magazine.
