Don't fall behind: leverage AI coding tools to the max by improving the way you test
"Jan's courses are a beacon of clarity, presenting complex concepts in an easily digestible format that doesn't neglect the depth needed to build robust software. It's rare to find educational material that strikes such a perfect balance between accessibility and comprehensive coverage."
Giacomo Miolo
Chief Data Officer @ Volteras
Drive your implementations with tests. Build a safety net out of the box. Deliver solutions faster, with more confidence.
Solve one small piece of the problem at a time. Come a step closer to the final solution with every new test.
Avoid updating lots of tests due to signature or implementation changes. Keep tests easy to maintain.
Implement test-doubles to replace I/O code (database, APIs, …). Experiment in production with feature flags.
Ensure simple test setup, resistance to changes in implementation, and readability. Change only what's necessary.
Let Claude Code and Cursor move fast without breaking things. Write tests you and your AI tools can trust.
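The test-double idea above can be sketched in plain Python. This is a hypothetical example, not code from the guide: all names (`CrmClient`, `FakeCrmClient`, `sync_users`) are invented for illustration. The fake records calls in memory instead of performing real HTTP I/O, so the test runs fast and needs no network.

```python
from dataclasses import dataclass, field

class CrmClient:
    """Port: the interface our app uses to talk to the CRM (hypothetical)."""
    def upsert_contact(self, email: str) -> None:
        raise NotImplementedError

@dataclass
class FakeCrmClient(CrmClient):
    """Test double: records calls in memory instead of doing HTTP I/O."""
    contacts: list = field(default_factory=list)

    def upsert_contact(self, email: str) -> None:
        self.contacts.append(email)

def sync_users(users: list, crm: CrmClient) -> int:
    """Business logic under test: push every user to the CRM."""
    for user in users:
        crm.upsert_contact(user)
    return len(users)

# The test asserts on observable behavior, not on implementation details.
fake = FakeCrmClient()
assert sync_users(["a@x.io", "b@x.io"], fake) == 2
assert fake.contacts == ["a@x.io", "b@x.io"]
```

Because `sync_users` depends only on the `CrmClient` interface, the same code runs against the real HTTP client in production and the in-memory fake in tests.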
"Software is of high quality when you can safely change it faster than business can change their mind!"
Get the Complete Python Testing Guide – real-world patterns, AI-ready workflows, and tests you can trust.
Get the Course
Modern Python testing works best with modern tools. The right tools make testing significantly simpler. For example, FastAPI's dependency injection can be used to make your tests faster while validating the same behavior.
"A passing test must give you confidence that things are working.
A failing test must give you confidence that things are not working."
"High-quality tests make your nights and weekends peaceful!"
"Test behaviour, not implementation details!"
I worked on codebases where 100% code coverage was required. Codebases where every method but the tested one had to be mocked. Believe me, I know what it means to update 100 tests for a single-line change in the implementation. I contributed to projects where we were chasing our own tail – one bug was fixed, and three new bugs were added to production. I was in setups where we just restarted failed test jobs and hoped they'd pass the next time. I know what it means to have automated tests that work against you. You lose time writing, executing, and maintaining them, and despite that, bugs keep popping up everywhere. I've also seen what happens when AI tools write code without solid tests to guide them – fast output, but nothing you can trust. So believe me when I say: I know the struggle with automated tests.
Despite all these experiences, I never gave up on automated testing. Instead, I decided to experiment as long as needed – until I found the way I'd been reading about. The way that allows you to ship to production whenever tests pass. The way where tests work for you, not against you. At this point in my career, I can say I found it. I'm part of the team that ships to production ten times per day. I run Claude Code with multiple agents at the same time, each working on different parts of the codebase – and because the tests are solid, I trust what they produce. Deployment is a non-event. If pipelines are green, we ship to production.
I've read quite a few books about software testing. I followed tutorials and courses. One thing was always missing: "How do I apply this to a real-world project?" Examples were usually trivial and far from real-world problems (e.g., business logic was implemented, but there was no database interaction), so I had to figure things out on my own. I'm not saying those resources didn't help – they helped me a lot. But I just couldn't find a way to apply their ideas directly in my day-to-day work.
Over the past years, I've helped quite a few developers set up their projects, improve their Python testing skills, set up AWS environments, etc. Over and over again, I had to help them with testing. Many times, the code design didn't allow for effective testing. And lately, AI tools have been making things worse – generating code fast, but without proper tests to catch mistakes, they created more problems than they solved. I did help, but often a major rewrite would have been needed to simplify testing. So I decided to write the Complete Python Testing Guide – to show developers how to build applications using real-world examples (e.g., syncing users to a CRM), so the next time they need to test something, they can simply reference this guide. (That's what I've been doing with all my courses in my day-to-day job for years.)
Practical testing advice and new articles delivered to your inbox.