This is Part 3 of the humor-inspired saga on the journey from monoliths to microservices, to serverless, and now incorporating AI agents. If you haven’t read Part 1: Mono’s Journey from Monolith to Microservices and Part 2: Mikro’s Serverless Saga, please do so first.
Zero Trust Security Architecture
Remember when we used to think of security like a medieval castle? High walls, a moat, and guards at the gate. Once you were inside, you were trusted. In the tech world, that model worked well when our entire stack lived in a data center down the hall. Unfortunately, that castle doesn’t exist anymore.
According to IBM, the average cost of a data breach reached $4.9 million, with compromised credentials being the most common initial attack vector. Organizations that still trust their internal networks face a harsh reality: once an attacker gains authenticated access, they often have free rein to move laterally through systems. The philosophical shift from perimeter-based to Zero Trust security isn’t just about technology; it’s about survival in the modern threat landscape.
Microservices vs Monoliths vs Modular Monoliths: A 2025 Decision Framework
The question arises frequently in engineering discussions: “Should we break up our monolith into microservices?” The answer is often surprising: “Probably not” or “It depends”. This isn’t because microservices are bad—they’re not. It’s because the industry has finally moved past the religious wars where you were either team microservices or team monolith, with no middle ground. The reality in 2025 is far more nuanced.
AI Interoperability with MCP (and a Spring MCP Server example)
Model Context Protocol (MCP) is a structured, interoperable standard that enables AI agents to query, invoke, and respond to external APIs or services. Think of MCP as a universal translator that allows Large Language Models (LLMs) like Claude to seamlessly connect with databases, APIs, file systems, and other services through a standardized interface.
At its core, MCP solves a critical AI development problem: the fragmented integration landscape. Before MCP, each AI application required custom connectors and bespoke integrations for every external service it needed to access. MCP standardizes this process through a client-server architecture in which AI applications act as MCP clients and external services expose themselves through MCP servers.
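Under the hood, MCP messages are framed as JSON-RPC 2.0. As a taste of what the standardized interface looks like on the wire, here is a minimal stdlib-Python sketch of the `tools/call` request an MCP client sends to a server (the `get_horoscope` tool name is a hypothetical example, not something from the posts below):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# An MCP client would write this message to the server's transport
# (stdio or HTTP) and await a matching JSON-RPC response.
request = mcp_tool_call(1, "get_horoscope", {"sign": "leo"})
parsed = json.loads(request)
print(parsed["method"])  # tools/call
```

Every tool invocation, whatever the server does behind it, travels in this same envelope; that uniformity is what lets one client talk to any MCP server.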
Unlocking the Power of Multi-Agent AI with CrewAI
Artificial Intelligence (AI) has evolved rapidly over the last few years. From single-task large language models (LLMs) to entire systems of autonomous agents, the AI ecosystem is now enabling new classes of intelligent workflows. In this blog post, we’ll build a multi-agent AI assistant that takes in a resume profile, a resume document, and a job description link, then produces a tailored resume and interview questions. We’ll explore how to do this using CrewAI, a Python-based multi-agent framework, and run it against both local models via Ollama and hosted models via OpenAI’s API.
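To see the shape of the pattern before diving into the post, here is a framework-free sketch of the agent/task idea in plain Python. This is not the CrewAI API itself; the `Agent` class, the stubbed LLM, and the role names are all illustrative stand-ins so the sketch runs offline, and in practice you would swap `stub_llm` for a real Ollama or OpenAI call:

```python
from dataclasses import dataclass, field
from typing import Callable

def stub_llm(prompt: str) -> str:
    """Stand-in for a real model call; echoes the start of the prompt."""
    return f"[draft based on: {prompt[:60]}...]"

@dataclass
class Agent:
    role: str
    goal: str
    llm: Callable[[str], str] = field(default=stub_llm)

    def run(self, task: str, context: str = "") -> str:
        # Each agent folds its role and goal into the prompt it sends.
        prompt = f"ROLE={self.role} GOAL={self.goal} TASK={task} CONTEXT={context}"
        return self.llm(prompt)

# Two agents chained: the second builds on the first's output.
tailor = Agent(role="resume writer", goal="tailor the resume to the job")
coach = Agent(role="interview coach", goal="draft likely questions")

resume = tailor.run("Rewrite the resume for the job description")
questions = coach.run("Generate interview questions", context=resume)
print(questions)
```

CrewAI wraps this same idea in `Agent`, `Task`, and `Crew` abstractions that handle prompting, tool use, and inter-agent hand-offs for you; the post walks through the real thing.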
The AI Revolution in Software Engineering: How Senior Leaders Can Drive Strategic Gains
As AI rapidly transforms industries worldwide, software engineering is no exception. Once experimental, AI-driven development tools have become mainstream, promising productivity gains, enhanced quality, and a shift in the role of developers. Senior engineering leaders who leverage AI strategically can expect profound impacts, from shortened development cycles to improved developer experience and better product-market alignment.
Integrating Spring AI Framework in Your Java Application
This blog post will integrate the Spring AI framework into a Java application. We’ll use a simple project that includes a ChatService and a ChatController to demonstrate using the Spring AI framework to generate text and image responses, as well as horoscopes, based on user input.
Mikro’s Serverless Saga: From Microservices to Madness and Back
This is Part 2 of a humor-inspired take on monoliths to microservices that I wrote a few years back: https://blogs.justenougharchitecture.com/monos-journey-from-monolith-to-microservices/. If you haven’t read that, please do so first.
Mikro was serving his consumers as always. He consistently met his promises (SLAs), and his life was good. Suddenly, he felt a stab and excruciating pain. “Damnit, what was that?” he said. To Mikro’s horror, he found himself being sliced and diced into smaller and smaller pieces. “But I thought I was already micro enough!” he wailed as functions were extracted from his very being.
Optimizing Software Engineering with the AWS Well-Architected Framework
Designing and building robust, scalable, and efficient systems is a fundamental requirement in software engineering. The AWS Well-Architected Framework is an essential resource that can significantly aid this process. Although it originates from Amazon Web Services, the principles and best practices it outlines are universally applicable. This blog aims to provide an understanding of the AWS Well-Architected Framework, its core focus areas, and its value to software engineering practices, regardless of whether you use AWS services.
Unlocking the Power of LLMs with LangChain
As an AI and software professional, you’ve likely heard the buzz around large language models (LLMs) like GPT-3, ChatGPT, and their growing capabilities. These powerful models can handle a wide range of natural language tasks, from text generation to question answering. However, effectively leveraging LLMs in your own applications can be a complex challenge. That’s where LangChain comes in.