Learning Tests: better than benzos
Updated: Apr 27
Last week I had to integrate AWS Cognito into a client's system. It was my first time using Cognito, so I started by reading the documentation, then went into the AWS Console, and immediately froze. There were so many configuration items, so many choices to make, so much unfamiliar terminology, and I had no idea how to move forward. My impostor syndrome took over and I could feel the fight, flight, or freeze response kicking in. Then I remembered that I'm actually pretty good at my job, managed to contain the feeling and frame it in its right place, and moved on. I remembered there's a great way to get immediate feedback on my understanding of how an external system works without having to dive into elaborate documentation or technical detail. I remembered that I can write a learning test.
A learning test begins its life as a draft. I want to learn how to use a new SDK or API, and I want to assert that I understand how to use it and that I'm able to communicate with it the way I expect. I write a set of tests that document my expectations of this external dependency, and by running them, I'm able to prove that these expectations are met. In my case, one such expectation was "a user can sign up, sign in, and receive a valid JWT". So I wrote a test that instantiates a Cognito User Pool object (with hard-coded IDs of the temporary AWS account I was using for my integration), then calls the APIs for signing up, signing in, and verifying a JWT. Immediately I ran into a problem: a user needs to be email-verified before they can sign in. The fact that I encountered this problem early on - rather than in the context of a bigger application - allowed me to easily pinpoint it and investigate a solution. The workaround, by the way, is a Pre-sign-up Lambda Trigger that automatically marks any user as verified.
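For the curious, the Pre-sign-up trigger workaround is tiny. A minimal sketch, assuming a Python Lambda runtime: Cognito invokes the handler with an event, and setting the `autoConfirmUser` and `autoVerifyEmail` response flags tells Cognito to confirm the user immediately, so the learning test can sign in right after signing up.

```python
# Pre-sign-up Lambda Trigger: auto-confirm every user (for ephemeral
# test environments only - you would not want this in production).
def lambda_handler(event, context):
    # Cognito reads these flags from the response it gets back.
    event["response"]["autoConfirmUser"] = True
    event["response"]["autoVerifyEmail"] = True
    return event
```

Cognito requires the handler to return the (mutated) event; forgetting the `return` is a classic way to break the trigger silently.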
A learning test will be messy at first, and that's ok. I don't want to think about design or engineering right now. I want my full focus on making sure my requirements of the dependency are met. It's ok to duplicate code, it's ok to hard-code secrets (in dedicated, ephemeral environments, of course!), and it's ok to make assumptions. All of these will be dealt with after my current task is done - proving that I know how to use the dependency. Then, I refactor. I created a CognitoAuthAdapter class, moved the code from the test into it, moved some things around, created some types to make the API nicer, and voilà - I now had an integration test for my AuthAdapter. Later, I implemented an in-memory fake for use in acceptance tests, and generalized this test into a contract test, running against both the fake and the real implementation - proving that my fake is equivalent to the Cognito-based one and can safely be used in fast acceptance tests.
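The fake-plus-contract-test shape described above can be sketched roughly like this. The `InMemoryAuthAdapter` name and the `sign_up`/`sign_in`/`verify_token` interface are my own illustration, not the actual code from this project; the point is that the same contract function runs against the fake here and would run, unchanged, against the real `CognitoAuthAdapter` in a slower integration suite.

```python
import uuid


class AuthError(Exception):
    """Raised when authentication fails (duplicate user, bad credentials)."""


class InMemoryAuthAdapter:
    """Fake auth backend for fast acceptance tests - no network, no AWS."""

    def __init__(self):
        self._users = {}      # email -> password
        self._tokens = set()  # tokens this adapter has issued

    def sign_up(self, email, password):
        if email in self._users:
            raise AuthError("user already exists")
        self._users[email] = password

    def sign_in(self, email, password):
        if self._users.get(email) != password:
            raise AuthError("invalid credentials")
        token = uuid.uuid4().hex  # stands in for a real JWT
        self._tokens.add(token)
        return token

    def verify_token(self, token):
        return token in self._tokens


def auth_contract(adapter):
    """Expectations any conforming adapter (fake or Cognito) must satisfy."""
    adapter.sign_up("dev@example.com", "s3cret!")
    token = adapter.sign_in("dev@example.com", "s3cret!")
    assert adapter.verify_token(token)
    assert not adapter.verify_token("forged-token")
```

Running `auth_contract` against both implementations is what justifies swapping the fake in wherever speed matters: if the fake ever drifts from Cognito's behavior, the contract test against the real adapter catches it.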
This anecdote is a great example of how testing and TDD became my superpower and helped me develop my career. Being autistic, I'm naturally anxious and have a hard time dealing with uncertainty (and there aren't many things more imposing than an AWS product I'm unfamiliar with). TDD is a great enabler here, helping me break the big problem down into smaller, tangible goals, so that I don't have to cram the whole thing into my mind at once. This is what allowed me to tackle any problem, in any domain I encountered, in programming languages I'd never used, with dependencies that were completely new to me. All I needed to do was pick a well-defined expectation or requirement, write a test for it, and make it pass. Whenever a test passes, I have concrete proof that I made progress, and I have a checkpoint I can always come back to if I get lost along the way. I know there's nothing I can do that will break things beyond repair, and this knowledge is, indeed, a power.