

Build a REST API for the Jamstack with Hapi and TypeScript – SitePoint

The Jamstack has a nice way of separating the front end from the back end to where the entire solution doesn’t have to ship in a single monolith — and all at the same time. When the Jamstack is paired with a REST API, the client and the API can evolve independently. This means both front and back ends are not tightly coupled, and changing one doesn’t necessarily mean changing the other.

In this article, I’ll take a look at a REST API from the perspective of the Jamstack. I’ll show how to evolve the API without breaking existing clients while adhering to REST standards. I’ll pick Hapi as the tool of choice to build the API, and Joi for endpoint validations. The database persistence layer will go in MongoDB via Mongoose to access the data. Test-driven development will help me iterate through changes and provide a quick way to get feedback with less cognitive load. At the end, the goal is for you to see how REST and the Jamstack can provide a solution with high cohesion and low coupling between software modules. This type of architecture is best for distributed systems with lots of microservices, each on their own separate domains. I’ll assume a working knowledge of NPM, ES6+, and a basic familiarity with API endpoints.

The API will work with author data, with a name, an email, and an optional 1:N (one-to-few via document embedding) relationship on favorite topics. I’ll write GET, PUT (with an upsert), and DELETE endpoints. To test the API, any client that supports fetch() will do, so I’ll pick Hoppscotch and cURL.
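To make that shape concrete, here’s a hypothetical author resource plus a minimal shape check a client could rely on. The values are made up; only the field names come from the schema built later in this tutorial:

```javascript
// A hypothetical author resource as the API will return it.
// `topics` is optional: when it's empty, the API omits the field entirely.
const sampleAuthor = {
  name: 'C R',
  email: 'xyz@abc.net',
  topics: ['JavaScript', 'MongoDB'], // the optional 1:N embedded relation
  createdAt: '2021-01-08T06:00:00.000Z'
}

// A minimal shape check: required strings plus the optional topics array
function isAuthor(obj) {
  return typeof obj.name === 'string' &&
    typeof obj.email === 'string' &&
    (obj.topics === undefined || Array.isArray(obj.topics))
}
```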

I’ll keep the reading flow of this piece like a tutorial where you can follow along from top to bottom. For those who’d rather skip to the code, it’s available on GitHub for your viewing pleasure. This tutorial assumes a working version of Node (ideally the latest LTS) and MongoDB already installed.

Initial Setup

To start the project from scratch, create a folder and cd into it:

mkdir hapi-authors-rest-api
cd hapi-authors-rest-api

Once inside the project folder, fire up npm init and follow the prompt. This creates a package.json at the root of the folder.

Every Node project has dependencies. I’ll need Hapi, Joi, and Mongoose to get started:

npm i @hapi/hapi joi mongoose --save-exact
  • @hapi/hapi: HTTP REST server framework
  • joi: powerful object schema validator
  • mongoose: MongoDB object document modeling

Check the package.json to make sure all dependencies and project settings are in place. Then, add an entry point to this project:

"scripts": {
  "start": "node index.js"
}

MVC Folder Structure with Versioning

For this REST API, I’ll use a typical MVC folder structure with controllers, routes, and a database model. The controller will have a version, like AuthorV1Controller, to allow the API to evolve when there are breaking changes to the model. Hapi will have a server.js and an index.js to make this project testable via test-driven development. The test folder will contain the unit tests.

Below is the overall folder structure:

┣━┓ config
┃ ┣━━ dev.json
┃ ┗━━ index.js
┣━┓ controllers
┃ ┗━━ AuthorV1Controller.js
┣━┓ model
┃ ┣━━ Author.js
┃ ┗━━ index.js
┣━┓ routes
┃ ┣━━ authors.js
┃ ┗━━ index.js
┣━┓ test
┃ ┗━━ Author.js
┣━━ index.js
┣━━ package.json
┗━━ server.js

For now, go ahead and create the folders and the respective files inside each folder:

mkdir config controllers model routes test
touch config/dev.json config/index.js controllers/AuthorV1Controller.js model/Author.js model/index.js routes/authors.js routes/index.js test/Author.js index.js server.js

This is what each folder is intended for:

  • config: configuration data to plug into the Mongoose connection and the Hapi server.
  • controllers: these are Hapi handlers that deal with the Request/Response objects. Versioning allows multiple endpoints per version number — that is, /v1/authors, /v2/authors, and so on.
  • model: connects to the MongoDB database and defines the Mongoose schema.
  • routes: defines the endpoints with Joi validation for REST purists.
  • test: unit tests via Hapi’s lab tool. (More on this later.)

In a real project, you may find it useful to abstract common business logic into a separate folder, say utils. I recommend creating an AuthorUtil.js module with purely functional code to make this reusable across endpoints and easy to unit test. Because this solution doesn’t have any complex business logic, I’ll choose to skip this folder.

One gotcha to adding more folders is having more layers of abstraction and more cognitive load while making changes. With exceptionally large code bases, it’s easy to get lost in the chaos of layers of indirection. Sometimes it’s better to keep the folder structure as simple and as flat as possible.


To improve the developer experience, I’ll now add TypeScript type declarations. Because Mongoose and Joi define the model at runtime, there’s little value in adding a type checker at compile time. In TypeScript, it’s possible to add type definitions to a vanilla JavaScript project and still reap the benefits of a type checker in the code editor. Tools like WebStorm or VS Code will pick up type definitions and allow the programmer to “dot” into the code. This technique is often called IntelliSense, and it’s enabled when the IDE has the types available. What you get with this is a nice way to define the programming interface so developers can dot into objects without looking at the documentation. The editor will also sometimes show warnings when developers dot into the wrong object.

This is what IntelliSense looks like in VS Code:

In WebStorm, this is called code completion, but it’s essentially the same thing. Feel free to pick whichever IDE you prefer to write the code. I use Vim and WebStorm, but you may choose differently.
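One cheap way to feed the editor’s type checker in vanilla JavaScript, without any TypeScript build step, is JSDoc annotations. Here’s a small sketch with a hypothetical helper (not part of this project) that IntelliSense can fully infer:

```javascript
/**
 * Builds a display label for an author.
 * Editors that read JSDoc will flag a call like authorLabel(42).
 * @param {{ name: string, email: string, topics?: string[] }} author
 * @returns {string}
 */
function authorLabel(author) {
  return `${author.name} <${author.email}>`
}
```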

To enable TypeScript type declarations in this project, fire up NPM and save these developer dependencies:

npm i @types/hapi @types/mongoose --save-dev

I recommend keeping developer dependencies separate from app dependencies. This way, it’s clear to other devs on the team what the packages are meant for. When a build server pulls down the repo, it also has the option to skip packages the project doesn’t need at runtime.

With all the developer niceties in place, it’s now time to start writing code. Open the Hapi server.js file and put in place the main server:

const config = require('./config')
const routes = require('./routes')
const db = require('./model')
const Hapi = require('@hapi/hapi')

const server = Hapi.server({
  port: config.APP_PORT,
  host: config.APP_HOST,
  routes: {
    cors: true
  }
})

server.route(routes)

exports.init = async () => {
  await server.initialize()
  await db.connect()
  return server
}

exports.start = async () => {
  await server.start()
  await db.connect()
  console.log(`Server running at: ${server.info.uri}`)
  return server
}

process.on('unhandledRejection', (err) => {
  process.exit(1)
})
I’ve enabled CORS by setting cors to true so this REST API can work with Hoppscotch.

To keep it simple, I’ll forgo semicolons in this project. It’s somewhat freeing to skip a TypeScript build in this project and skip typing that extra character. This follows the Hapi mantra, because it’s all about developer happiness anyway.

Under config/index.js, be sure to export the dev.json data:

module.exports = require('./dev')

To flesh out configuring the server, put this in dev.json:

{
  "APP_PORT": 3000,
  "APP_HOST": "localhost"
}

REST Validation

To keep the REST endpoints following HTTP standards, I’ll add Joi validations. These validations help to decouple the API from the client, because they enforce resource integrity. For the Jamstack, this means the client no longer cares about implementation details behind each resource. It’s free to treat each endpoint independently, because the validation will ensure a valid request to the resource. Adhering to a strict HTTP standard makes the client evolve based on a target resource that sits behind an HTTP boundary, which enforces the decoupling. Really, the goal is to use versioning and validations to keep a clean boundary in the Jamstack.

With REST, the main goal is to maintain idempotency with the GET, PUT, and DELETE methods. These are safe request methods because subsequent requests to the same resource don’t have side effects. The same intended effect gets repeated even if the client fails to establish a connection.
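A quick sketch in plain JavaScript (an in-memory stand-in, not the Hapi code) shows what idempotency buys: replaying the same PUT-style upsert leaves the resource in exactly the same state, so a client can safely retry after a dropped connection.

```javascript
// In-memory stand-in for the authors resource
const store = new Map()

// PUT-style upsert: the full representation replaces the resource
function putAuthor(id, payload) {
  const existed = store.has(id)
  store.set(id, { ...payload })
  return existed ? 200 : 201 // 201 Created on first write, 200 OK on a replay
}

const body = { name: 'C R', email: 'xyz@abc.net' }
const first = putAuthor('abc123', body)  // 201
const second = putAuthor('abc123', body) // 200, and the state is unchanged
```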

I’ll choose to skip POST and PATCH, since these aren’t safe methods. This is for the sake of brevity and idempotency, but not because these methods tightly couple the client in any way. The same strict HTTP standards can apply to these methods, except that they don’t guarantee idempotency.

In routes/authors.js, add the following Joi validations:

const Joi = require('joi')

const authorV1Params = Joi.object({
  id: Joi.string().required()
})

const authorV1Schema = Joi.object({
  name: Joi.string().required(),
  email: Joi.string().email().required(),
  topics: Joi.array().items(Joi.string()),
  createdAt: Joi.date().iso()
})

Note that any changes to the versioned model will likely need a new version, like a v2. This guarantees backwards compatibility for existing clients and allows the API to evolve independently. Required fields will fail the request with a 400 (Bad Request) response when there are fields missing.
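That 400 behavior can be sketched with a hand-rolled check. This is only an illustration of what Joi does for you automatically; it isn’t code from the project:

```javascript
// Hand-rolled stand-in for the Joi required-field behavior above
function validateAuthor(payload) {
  const missing = ['name', 'email'].filter((field) => payload[field] === undefined)
  if (missing.length > 0) {
    // Joi would reject the request and Hapi responds with 400 Bad Request
    return { statusCode: 400, error: `Bad Request: missing ${missing.join(', ')}` }
  }
  return { statusCode: 200 }
}
```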

With the params and schema validations in place, add the actual routes to this resource:

const v1Endpoint = require('../controllers/AuthorV1Controller')

module.exports = [{
  method: 'GET',
  path: '/v1/authors/{id}',
  handler: v1Endpoint.details,
  options: {
    validate: {
      params: authorV1Params
    },
    response: {
      schema: authorV1Schema
    }
  }
}, {
  method: 'PUT',
  path: '/v1/authors/{id}',
  handler: v1Endpoint.upsert,
  options: {
    validate: {
      params: authorV1Params,
      payload: authorV1Schema
    },
    response: {
      schema: authorV1Schema
    }
  }
}, {
  method: 'DELETE',
  path: '/v1/authors/{id}',
  handler: v1Endpoint.delete,
  options: {
    validate: {
      params: authorV1Params
    }
  }
}]
To make these routes available to the server.js, add this in routes/index.js:

module.exports = [...require('./authors')]

The Joi validations go in the options field of the routes array. Each request path takes in a string ID param that matches the ObjectId in MongoDB. This id is part of the versioned route because it’s the target resource the client needs to work with. For a PUT, there’s a payload validation that matches the response from the GET. This is to adhere to REST standards, where the PUT response must match a subsequent GET.

This is what it says in the standard:

A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being sent in a 200 (OK) response.

This makes it inappropriate for a PUT to support partial updates, since a subsequent GET wouldn’t match the PUT. For the Jamstack, it’s important to adhere to HTTP standards to ensure predictability for clients and decoupling.

The AuthorV1Controller handles the request via a method handler in v1Endpoint. It’s a good idea to have one controller for each version, because this is what sends the response back to the client. This makes it easier to evolve the API via a new versioned controller without breaking existing clients.

The Author’s Database Collection

The Mongoose object modeling for Node first needs a MongoDB database installed. I recommend setting one up on your local dev box to play with MongoDB. A minimal installation only needs two executables, and you can get the server up and running in about 50 MB. This is the real power of MongoDB, because a full database can run in dirt-cheap hardware like a Raspberry Pi, and this scales horizontally to as many boxes as needed. The database also supports a hybrid model where the servers can run both in the cloud and on-prem. So, no excuses!

Inside the model folder, open up index.js to set up the database connection:

const config = require('../config')
const mongoose = require('mongoose')

module.exports = {
  connect: async function() {
    await mongoose.connect(
      config.DB_HOST + "/" + config.DB_NAME,
      config.DB_OPTS)
  },
  connection: mongoose.connection,
  Author: require('./Author')
}

Note the Author collection gets defined in Author.js in this same folder:

const mongoose = require('mongoose')

const authorSchema = new mongoose.Schema({
  name: String,
  email: String,
  topics: [String],
  createdAt: Date
})

if (!authorSchema.options.toObject) authorSchema.options.toObject = {}
authorSchema.options.toObject.transform = function(doc, ret) {
  delete ret._id
  delete ret.__v
  if (ret.topics && ret.topics.length === 0) delete ret.topics
  return ret
}

module.exports = mongoose.model('Author', authorSchema)

Keep in mind the Mongoose schema doesn’t reflect the same requirements as the Joi validations. This adds flexibility to the data, to support multiple versions, in case somebody needs backwards compatibility across multiple endpoints.

The toObject transform sanitizes the JSON output, so the Joi validator doesn’t throw an exception. If there are any extra fields, like _id, which are in the Mongoose document, the server sends a 500 (Internal Server Error) response. The optional field topics gets nuked when it’s an empty array, because the GET must match a PUT response.
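Because the transform is just a function over a plain object, it’s easy to reason about in isolation. Here’s the same sanitization written as a pure function, a stand-in for the Mongoose hook with identical deletions:

```javascript
// Mirrors the toObject transform: strip the Mongo internals,
// and drop `topics` when it's an empty array
function sanitize(ret) {
  const out = { ...ret }
  delete out._id
  delete out.__v
  if (out.topics && out.topics.length === 0) delete out.topics
  return out
}
```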

Finally, set the database configuration in config/dev.json:

{
  "APP_PORT": 3000,
  "APP_HOST": "localhost",
  "DB_HOST": "mongodb://localhost:27017",
  "DB_NAME": "hapiAuthor",
  "DB_OPTS": {
    "useNewUrlParser": true,
    "useUnifiedTopology": true,
    "poolSize": 1
  }
}
Behavior-driven Development

Before fleshing out the endpoints for each method in the controller, I like to begin by writing unit tests. This helps me conceptualize the problem at hand to get optimal code. I’ll do red/green but skip the refactor, leaving this as an exercise for you so as not to belabor the point.

I’ll pick Hapi’s lab utility and their BDD assertion library to test the code as I write it:

npm i @hapi/lab @hapi/code --save-dev

In test/Author.js, add this basic scaffold to the test code. I’ll pick the behavior-driven development (BDD) style to make this more fluent:

const Lab = require('@hapi/lab')
const { expect } = require('@hapi/code')
const { after, before, describe, it } = exports.lab = Lab.script()
const { init } = require('../server')
const { connection } = require('../model')

const id = '5ff8ea833609e90fc87fee52'

const payload = {
  name: 'C R',
  email: 'xyz@abc.net',
  createdAt: '2021-01-08T06:00:00.000Z'
}

describe('/v1/authors', () => {
  let server

  before(async () => {
    server = await init()
  })

  after(async () => {
    await server.stop()
    await connection.close()
  })
})

As you build more models and endpoints, I recommend repeating this same scaffold code per test file. Unit tests need not be DRY (“don’t repeat yourself”), and it’s perfectly fine to start/stop the server and database connection. The MongoDB connection and the Hapi server can handle this while keeping tests snappy.

Tests are almost ready to run except for a minor wrinkle in AuthorV1Controller, because it’s empty. Crack open controllers/AuthorV1Controller.js and add this:

exports.details = () => {}
exports.upsert = () => {}
exports.delete = () => {}

The tests run via npm t in the terminal. Be sure to set this in package.json:

"scripts": {
  "test": "lab"
}

Go ahead and fire up the unit tests. There should be nothing failing yet. To fail the unit tests, add this inside describe():

it('PUT responds with 201', async () => {
  const { statusCode } = await server.inject({
    method: 'PUT',
    url: `/v1/authors/${id}`,
    payload: {...payload}
  })

  expect(statusCode).to.equal(201)
})

it('PUT responds with 200', async () => {
  const { statusCode } = await server.inject({
    method: 'PUT',
    url: `/v1/authors/${id}`,
    payload: {
      ...payload,
      topics: ['JavaScript', 'MongoDB']}
  })

  expect(statusCode).to.equal(200)
})

it('GET responds with 200', async () => {
  const { statusCode } = await server.inject({
    method: 'GET',
    url: `/v1/authors/${id}`
  })

  expect(statusCode).to.equal(200)
})

it('DELETE responds with 204', async () => {
  const { statusCode } = await server.inject({
    method: 'DELETE',
    url: `/v1/authors/${id}`
  })

  expect(statusCode).to.equal(204)
})

To start passing unit tests, put this inside controllers/AuthorV1Controller.js:

const db = require('../model')

exports.details = async (request, h) => {
  const author = await db.Author.findById(request.params.id).exec()
  request.log(['implementation'], `GET 200 /v1/authors ${author}`)
  return h.response(author.toObject())
}

exports.upsert = async (request, h) => {
  const author = await db.Author.findById(request.params.id).exec()

  if (!author) {
    const newAuthor = new db.Author(request.payload)
    newAuthor._id = request.params.id
    await newAuthor.save()
    request.log(['implementation'], `PUT 201 /v1/authors ${newAuthor}`)
    return h
      .response(newAuthor.toObject())
      .created(`/v1/authors/${newAuthor._id}`)
  }

  author.name = request.payload.name
  author.email = request.payload.email
  author.topics = request.payload.topics
  await author.save()
  request.log(['implementation'], `PUT 200 /v1/authors ${author}`)
  return h.response(author.toObject())
}

exports.delete = async (request, h) => {
  await db.Author.findByIdAndDelete(request.params.id).exec()
  request.log(['implementation'], `DELETE 204 /v1/authors ${request.params.id}`)
  return h.response().code(204)
}

A couple of things to note here. The exec() method is what materializes the query and returns a Mongoose document. Because this document has extra fields the Hapi server doesn’t care about, apply toObject before calling response(). The API’s default status code is 200, but this can be altered via code() or created().

With red/green/refactor test-driven development, I only wrote the minimum amount of code to get passing tests. I’ll leave writing more unit tests and more use cases to you. For example, the GET and DELETE should return a 404 (Not Found) when there’s no author for the target resource.
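As a starting point for that exercise, the 404 decision can be sketched as a pure function. This is a hypothetical helper, not code from the repo, but it captures the branch the handler would need:

```javascript
// Map a Mongoose lookup result to the GET response:
// null means the resource doesn't exist, so respond with 404 Not Found
function detailsResponse(author) {
  if (!author) {
    return { statusCode: 404 }
  }
  return { statusCode: 200, body: author }
}
```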

Hapi supports other niceties, like a logger that lives inside the request object. By default, the implementation tag sends debug logs to the console when the server is running, and this also works with unit tests. This is a nice clean way to see what’s happening to the request as it makes its way through the request pipeline.


Finally, before we can fire up the main server, put this in index.js:

const { start } = require('./server')

start()
An npm start should get you a running and working REST API in Hapi. I’ll now use Hoppscotch to fire requests at all the endpoints, in order from top to bottom: the two PUTs, then the GET, then the DELETE.

Or, the same can be done in cURL:

curl -i -X PUT -H "Content-Type:application/json" -d '{"name":"C R","email":"xyz@abc.net","createdAt":"2021-01-08T06:00:00.000Z"}' http://localhost:3000/v1/authors/5ff8ea833609e90fc87fee52
201 Created {"name":"C R","email":"xyz@abc.net","createdAt":"2021-01-08T06:00:00.000Z"}

curl -i -X PUT -H "Content-Type:application/json" -d '{"name":"C R","email":"xyz@abc.net","createdAt":"2021-01-08T06:00:00.000Z","topics":["JavaScript","MongoDB"]}' http://localhost:3000/v1/authors/5ff8ea833609e90fc87fee52
200 OK {"topics":["JavaScript","MongoDB"],"name":"C R","email":"xyz@abc.net","createdAt":"2021-01-08T06:00:00.000Z"}

curl -i -H "Content-Type:application/json" http://localhost:3000/v1/authors/5ff8ea833609e90fc87fee52
200 OK {"topics":["JavaScript","MongoDB"],"name":"C R","email":"xyz@abc.net","createdAt":"2021-01-08T06:00:00.000Z"}

curl -i -X DELETE -H "Content-Type:application/json" http://localhost:3000/v1/authors/5ff8ea833609e90fc87fee52
204 No Content

In the Jamstack, a JavaScript client can make these calls via fetch(). The nice thing about a REST API is that it doesn’t have to be a browser at all, because any client that supports HTTP will do. This is perfect for a distributed system where multiple clients can call the API via HTTP. The API can remain stand-alone with its own deployment schedule and be allowed to evolve freely.
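As a sketch, a client could wrap the GET endpoint like this. The URL shape follows the routes above, but the helper itself is hypothetical, and the fetch implementation is injected so the same code runs in the browser (window.fetch) or in a test with a stub:

```javascript
// Minimal client wrapper for the authors endpoint.
// `fetchImpl` is any function with the fetch() contract.
async function getAuthor(baseUrl, id, fetchImpl) {
  const res = await fetchImpl(`${baseUrl}/v1/authors/${id}`)
  if (!res.ok) {
    throw new Error(`GET /v1/authors/${id} failed with ${res.status}`)
  }
  return res.json()
}
```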


Conclusion

The Jamstack has a nice way of decoupling software modules via versioned endpoints and model validation. The Hapi server has support for this and other niceties, like type declarations, to make your job more enjoyable.
