This post is the first in a series dedicated to modern software engineering approaches and TDD on NestJS. The next posts will cover contract testing, testing microservices, and incorporating UI testing into the test suite, each building on the techniques introduced here.
NestJS is a popular JavaScript framework that promotes fast and scalable development of backend web applications running on Node.js. Sadly, while NestJS brings a lot of ready-made presets and examples for many modern practices such as microservices, ORMs and event-driven architecture, it does not offer a comprehensive solution for testing applications built using them. The documentation focuses heavily on unit tests for specific services while mocking their dependencies, and the developer is left to find their own way in testing integrations with third parties such as databases or message brokers.
An avid reader of my blog will recognize that there's no mention of my preferred approach to testing: writing fast integrative tests (aka Acceptance Tests or System Tests) which exercise all of the system's features while faking out any IO (third-party) dependency. This is achieved by extracting all IO operations into adapters (the Repository Pattern / DAO, or higher-level abstractions over event handling) and injecting these adapters from the outside: the production entry point injects the production implementations, while the test harness injects in-memory fakes instead.
Acceptance Tests shine where the system is rich in features while having a relatively small number of distinct IO operations. In a system that mostly glues third parties together with little logic of its own, extracting the IO operations to adapters will leave very little production code behind, rendering the acceptance tests moot.
Consider an e-commerce backend; typically such a system will consist of a ProductCatalog, an OrderManager and a Cart. The user adds products to their cart, then checks out the cart, creating an order. In the following example, we use MongoDB to store products and orders, and for the sake of simplicity, all services reside in a single monolithic HTTP server. We’ll write a test for the aforementioned flow using Supertest, as recommended by the NestJS documentation:
test('a customer can order a product', async () => {
  const {app, productRepo, orderRepo} = await createTestHarness();
  const product = await productRepo.create(aProduct());
  const cartId = someRandomString();

  await app
    .post(`/cart/${cartId}`)
    .send({productId: product.id})
    .expect(201);

  await app
    .get(`/cart/${cartId}`)
    .expect({id: cartId, items: [{
      productId: product.id,
      price: product.price,
      name: product.title
    }]});

  const orderId = await app
    .post(`/checkout/${cartId}`)
    .expect(201)
    .then(response => response.text);

  const order = await orderRepo.findById(orderId);
  expect(order).toMatchObject(expect.objectContaining({
    items: expect.arrayContaining([
      expect.objectContaining({
        productId: product.id,
      })
    ])
  }));
});
The test begins by starting the system up via a test harness. In this example, app is the Supertest object which we use to interact with the system and invoke expectations. We then proceed to create a product via the ProductRepository (since in this flow we are the Customer rather than the Merchant), then add it to the cart via an API call. We then request the cart via a second API call and assert that it contains the product we just added. A third API call checks out the cart (for the sake of this example we can assume a single-click checkout has been preconfigured by the customer). Finally, we fetch the order from the OrderRepository to assert that the order we just created exists and refers to the appropriate product id.
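Note that aProduct() and someRandomString() are helpers the test relies on but which aren't shown here. One plausible implementation, assuming the entities are modeled as zod schemas (the Order.parse call further down suggests as much); the exact fields and defaults are assumptions:

import { z } from 'zod';
import { nanoid } from 'nanoid';

// Assumed shape; the real Product entity may carry more fields.
export const Product = z.object({
  id: z.string(),
  title: z.string(),
  price: z.number(),
});
export type Product = z.infer<typeof Product>;

// Test-data builder: sensible defaults that any test can override.
export const aProduct = (
  overrides: Partial<Omit<Product, 'id'>> = {}
): Omit<Product, 'id'> => ({
  title: 'Blue Suede Shoes',
  price: 99.9,
  ...overrides,
});

export const someRandomString = () => nanoid();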
The test harness is a piece of software that bridges between the test and the System Under Test. It’s responsible for starting it up in a state ready for tests, injecting any fake objects and optionally inserting initial data without which the system cannot operate (feature flags, lookup tables, etc). We can start implementing it before replacing the MongoDB repositories with memory fakes, and as long as we have an instance of MongoDB listening, the tests will pass:
import request from 'supertest';
import { Test } from '@nestjs/testing';

async function createTestHarness() {
  const testingModule = await Test.createTestingModule({
    imports: [AppModule],
  }).compile();
  const productRepo = testingModule.get(MongoDBProductRepository);
  const orderRepo = testingModule.get(MongoDBOrderRepository);
  // init() must be awaited before the HTTP server can accept requests
  const nest = await testingModule.createNestApplication().init();
  return {
    app: request(nest.getHttpServer()),
    productRepo, orderRepo
  };
}
The AppModule instantiates all controllers and services and imports the MongoDB module, which provides our two Repository objects:
@Module({
  imports: [MongoDBModule.forRoot({
    uri: MONGODB_URI,
    dbName: "storeDB"
  })],
  providers: [CartManager],
  controllers: [
    CartController, ProductController,
    OrderController, CheckoutController]
})
export class AppModule {}
type Config = {
  uri: string;
  dbName: string;
} & Pick<MongoClientOptions, 'connectTimeoutMS' | 'socketTimeoutMS'>;

export class MongoDBModule {
  static forRoot({uri, dbName, ...config}: Config): DynamicModule {
    return {
      module: MongoDBModule,
      providers: [
        {
          provide: "storeDB",
          useFactory: async () => {
            const mongo = await new MongoClient(
              uri, config).connect();
            return mongo.db(dbName);
          },
        },
        MongoDBProductRepository,
        MongoDBOrderRepository,
      ],
      exports: [MongoDBProductRepository, MongoDBOrderRepository]
    };
  }
}
The MongoDBOrderRepository and MongoDBProductRepository use the plain MongoDB client to interact with the database. You could of course choose an ODM such as Mongoose instead, but the public interface of the class would look the same:
import { Collection, Db, ObjectId, WithId } from "mongodb";

type MongoOrder = Omit<Order, "id">;

const docToOrder = ({_id, ...rest}: WithId<MongoOrder>) =>
  Order.parse({id: _id.toString(), ...rest});

@Injectable()
export class MongoDBOrderRepository {
  private orders: Collection<MongoOrder>;

  constructor(@Inject("storeDB") db: Db) {
    this.orders = db.collection("orders");
  }

  async create(order: MongoOrder): Promise<Order> {
    const res = await this.orders.insertOne({
      _id: new ObjectId(), ...order
    });
    return {
      id: res.insertedId.toString(),
      ...order
    };
  }

  async findById(orderId: string): Promise<Order | null> {
    return this.orders.findOne({
      _id: {$eq: new ObjectId(orderId)}
    }).then(doc => doc ? docToOrder(doc) : null);
  }
}
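The MongoDBProductRepository is not reproduced in this post, since it mirrors the order repository almost line for line. A minimal sketch, assuming the Product entity from earlier:

type MongoProduct = Omit<Product, "id">;

const docToProduct = ({_id, ...rest}: WithId<MongoProduct>) =>
  Product.parse({id: _id.toString(), ...rest});

@Injectable()
export class MongoDBProductRepository {
  private products: Collection<MongoProduct>;

  constructor(@Inject("storeDB") db: Db) {
    // Same database handle as the orders repository, different collection
    this.products = db.collection("products");
  }

  async create(product: MongoProduct): Promise<Product> {
    const res = await this.products.insertOne({
      _id: new ObjectId(), ...product
    });
    return {id: res.insertedId.toString(), ...product};
  }

  async findById(productId: string): Promise<Product | null> {
    return this.products.findOne({
      _id: {$eq: new ObjectId(productId)}
    }).then(doc => doc ? docToProduct(doc) : null);
  }
}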
Up to this point, we’re using the production versions of the repositories, which will work well as long as we have an instance of MongoDB available, and as long as each test creates its own data and assumes nothing about anything existing (or not existing) in the database. However, as the system grows, we will start paying a premium in runtime (because of network calls to the database) and complexity, at which point it might prove beneficial to introduce memory fakes. We start by extracting an interface from the MongoDB repositories and implementing it again based on an array of entities. A bit of TypeScript magic helps:
export type OrderRepository = Omit<MongoDBOrderRepository,
  "orders">;

export class InMemoryOrderRepository implements OrderRepository {
  orders: Order[] = [];

  async create(order: Omit<Order, "id">): Promise<Order> {
    const created = {...order, id: nanoid()};
    this.orders.push(created);
    return created;
  }

  async findById(orderId: string): Promise<Order | null> {
    return this.orders.find(({id}) => id === orderId) || null;
  }
}
Using the Omit utility type, the compiler helps us ensure that the memory fake always exposes the same public interface as the MongoDB implementation. If a method is added or changed in the MongoDB implementation, the memory fake will not compile until we make the matching change there.
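The product repository gets the same treatment. A minimal sketch, again assuming the Product entity from earlier:

export type ProductRepository = Omit<MongoDBProductRepository,
  "products">;

export class InMemoryProductRepository implements ProductRepository {
  products: Product[] = [];

  async create(product: Omit<Product, "id">): Promise<Product> {
    const created = {...product, id: nanoid()};
    this.products.push(created);
    return created;
  }

  async findById(productId: string): Promise<Product | null> {
    return this.products.find(({id}) => id === productId) || null;
  }
}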
Next, we override the MongoDB implementations in the test harness:
async function createTestHarness() {
  const productRepo = new InMemoryProductRepository();
  const orderRepo = new InMemoryOrderRepository();
  const testingModule = await Test.createTestingModule({
    imports: [AppModule],
  })
    .overrideProvider(MongoDBProductRepository)
    .useValue(productRepo)
    .overrideProvider(MongoDBOrderRepository)
    .useValue(orderRepo)
    .compile();
  const nest = await testingModule.createNestApplication().init();
  return {
    app: request(nest.getHttpServer()),
    productRepo, orderRepo
  };
}
Since the test assumes nothing about the implementation of the system, it should still pass. However, since AppModule imports the MongoDBModule, the connection factory will attempt to reach MongoDB as soon as the testing module is compiled, and if we don't have a database listening, the test will hang until it fails on connection timeout. To solve that, we will need to turn AppModule into a dynamic module that takes an adapters module from the outside:
export class AppModule {
  static register(adapters: DynamicModule): DynamicModule {
    return {
      module: AppModule,
      imports: [adapters],
      providers: [CartManager],
      controllers: [
        CartController, ProductController,
        OrderController, CheckoutController]
    };
  }
}
The production code will call register() with the instance returned from MongoDBModule.forRoot(), and our test code will need to implement its own adapters module providing the memory fakes. However, for this to work, we will need to make a small change to the MongoDBModule as well, explicitly defining a string token for each repository, instead of automagically using the class name:
export class MongoDBModule {
  static forRoot({uri, dbName, ...config}: Config): DynamicModule {
    return {
      module: MongoDBModule,
      providers: [
        {
          provide: "storeDB",
          useFactory: ...
        },
        {
          provide: PRODUCT_REPO,
          useClass: MongoDBProductRepository
        },
        {
          provide: ORDER_REPO,
          useClass: MongoDBOrderRepository,
        },
      ],
      exports: [PRODUCT_REPO, ORDER_REPO]
    };
  }
}
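The tokens themselves are just constants shared by both modules (Symbols would work equally well); their exact values below are an assumption:

// Shared injection tokens, referenced by both adapter modules
// and by any @Inject() site that depends on a repository.
export const PRODUCT_REPO = "PRODUCT_REPO";
export const ORDER_REPO = "ORDER_REPO";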
And so our MemoryModule will provide different implementations for these same string tokens. Since both implementations conform to the same interface, the system will function exactly the same:
export class MemoryModule {
  static forTests(): DynamicModule {
    return {
      module: MemoryModule,
      providers: [
        {
          provide: PRODUCT_REPO,
          useClass: InMemoryProductRepository,
        },
        {
          provide: ORDER_REPO,
          useClass: InMemoryOrderRepository,
        }
      ],
      exports: [PRODUCT_REPO, ORDER_REPO]
    };
  }
}
We will also need to explicitly define the token name wherever we @Inject the repositories, and change the type of the dependency to the extracted interface:
@Controller("/order")
export class OrderController {
constructor(@Inject(ORDER_REPO) private repo: OrderRepository){}
@Get("/:orderId")
async getOrder(@Param("orderId") orderId: string) {
return this.repo.findById(orderId);
}
}
Finally, our test harness will now look like this:
async function createTestHarness() {
  const testingModule = await Test.createTestingModule({
    imports: [AppModule.register(MemoryModule.forTests())],
  }).compile();
  const productRepo = testingModule
    .get<InMemoryProductRepository>(PRODUCT_REPO);
  const orderRepo = testingModule
    .get<InMemoryOrderRepository>(ORDER_REPO);
  const nest = await testingModule.createNestApplication().init();
  return {
    app: request(nest.getHttpServer()),
    productRepo, orderRepo
  };
}
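For completeness, the production entry point composes the very same AppModule with the real adapters. A minimal sketch of a main.ts, where the connection details are placeholder assumptions:

import { NestFactory } from '@nestjs/core';

async function bootstrap() {
  // The production composition root injects the MongoDB adapters;
  // the test harness injects MemoryModule.forTests() instead.
  const app = await NestFactory.create(
    AppModule.register(MongoDBModule.forRoot({
      uri: process.env.MONGODB_URI ?? 'mongodb://localhost:27017',
      dbName: 'storeDB',
    }))
  );
  await app.listen(3000);
}
bootstrap();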
Using Hexagonal Architecture, we separated the IO concerns of our system from its functional concerns. Replacing the IO layer with memory fakes allows us to write fast yet comprehensive tests, combining the speed of unit tests with the black-box approach of end-to-end testing. However, this all rests on the assumption that the memory fakes behave exactly like the production implementations. The next post in this series will cover the methodology used to verify that assumption.
Working examples and code used for this post are available here.