What We Talk About When We Talk About Microservices
Microservices is a term we are hearing more and more in the world of software development, and we are all eager to embrace this new, exciting way of looking at things. Unfortunately, our eagerness often leads to variations in style and approach that create confusion and inconsistency, leaving us without a clear definition of what a microservice is. Each company or group of developers uses the label in a slightly different way. In general, it is commonly agreed that a microservice is a grouping of related pieces of an application that can be bundled together. How that grouping is done is where the definition becomes unclear. In some cases the grouping is by the functions or features the application provides to meet the users' business needs. Others group along more technical lines, dividing the database, screens, and interfaces each into their own microservices. We have seen everything from a single interface to what could be an entire subsystem of a large application with the microservice label attached.
This creates some unusual challenges for those of us who provide software metric analysis and function point counting. When we hear the word microservice, we each apply our own definition and often expect that definition to apply across the board. As a result, we expect a function point count to handle all microservices in the same way, which is simply not true. Function Point Analysis (FPA) is, at its simplest, a measurement of size. The size of an application is independent of the SDLC, platform, architecture, and coding languages used to develop it; these things have no impact on the function point count. So, as we apply FPA to this growing development style, we have to look past the microservices label to the functionality being delivered.
Decomposition of a Monolithic Application
The most common use of microservices that we see is the breakdown of an existing legacy/monolithic application. This is clearly going to be a very high-profile project that management will want to watch closely. As with all high-profile projects, this means there will be budget, time, and productivity goals that will need to be monitored.
It helps to focus on what, exactly, we are trying to capture with our metrics and measurements. At its core, this type of transformation is an architectural platform change. The typical focus is on seeing an improvement from the old way of doing things to the new way. Indeed, the whole reason for the transformation is that the new way should be more efficient, giving us the expectation that productivity should go up, bringing effort and costs down. From this perspective, measuring the transformation process is not the goal. The goal is to compare the old version of the product to the new version. By measuring the transformation project and holding it to old productivity standards, we may influence the way the transformation happens, causing development to focus on meeting current FP/productivity goals instead of process improvement. Once the transformation is complete, the application boundaries should be reassessed and new baseline count(s) should be done.
For those familiar with function point counting, this process may raise a few questions…
Q: If an existing monolithic application has been broken down into more than one application, will this make the function point counts higher?
Assuming that these applications will have some types of shared data and communication between them, yes, it is true that the sum of the parts may not equal the whole.
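To make that concrete, here is a hypothetical illustration. The component counts are invented, and the weights are the average-complexity values from IFPUG-style unadjusted counting; the point is only the mechanism: after the split, each application counts the other's shared file as an External Interface File (EIF) and needs its own transactions to exchange data across the new boundary, so the parts sum to more than the whole.

```python
# Hypothetical unadjusted function point (UFP) arithmetic showing why the
# sum of the split applications can exceed the original monolith's count.
# Weights are average-complexity IFPUG values; all component counts are invented.

WEIGHTS = {"ILF": 10, "EIF": 7, "EI": 4, "EO": 5, "EQ": 4}

def ufp(components):
    """Sum weighted counts for a dict like {"ILF": 2, "EI": 6, ...}."""
    return sum(WEIGHTS[t] * n for t, n in components.items())

# Monolith: two logical files maintained internally, plus its transactions.
monolith = ufp({"ILF": 2, "EI": 6, "EO": 4})

# After the split, each application keeps one file as an ILF but now
# references the other's file as an EIF, and each needs EI/EO transactions
# to send and receive data across the new application boundary.
app_a = ufp({"ILF": 1, "EIF": 1, "EI": 4, "EO": 3})
app_b = ufp({"ILF": 1, "EIF": 1, "EI": 4, "EO": 3})

print(monolith)        # 64: the monolith's total
print(app_a + app_b)   # 96: the sum of the parts exceeds the whole
```

The same data and behavior are being delivered, yet the shared files and the new cross-boundary traffic inflate the combined count.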
Q: If the boundary of our application remains the same and we still have one application will the function point counts be the same?
As with any transformation project, there are many things going on at once. The process may reveal functions that are no longer needed, which are then removed. Likewise, new features may be added.
There are many things that could cause a shift in the function point count from the old version of the baseline to the new. Assuming that all of the potential changes from old to new were closely tracked and documented, we can analyze those changes to understand their impact. The percentage of change these variations represent, and the activities needed to track them, should be discussed and taken into consideration when looking at the expected impact to the metrics.
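One way to make that reconciliation concrete is simple bookkeeping (all figures below are invented for illustration): start from the old baseline, subtract retired functionality, add new features, and attribute whatever remains of the shift to boundary effects such as shared files now counted in each application.

```python
# Hypothetical reconciliation of an old baseline count to a new one.
# All figures are invented; the point is the bookkeeping, not the values.

old_baseline = 1200   # FP count of the legacy application
removed      = 150    # functions found to be obsolete and dropped
added        = 90     # genuinely new features delivered during the rewrite

new_baseline = 1260   # count taken after the transformation is complete

# Whatever the adds and removals do not explain is attributable to other
# causes, such as boundary effects from the new application structure.
boundary_effect = new_baseline - (old_baseline - removed + added)
print(boundary_effect)  # 120 FP of the shift is unexplained by adds/removals
```

Walking stakeholders through this kind of arithmetic before comparing old and new productivity figures keeps the discussion focused on what actually changed.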
Slow Transition – Creating New Features as Microservices
When looking at a slower transition into the microservices realm, things become a bit harder to measure. We again need to focus on what we are measuring and what the expected impacts are to budget, time, and productivity goals. Because we do not have a clear dividing line of before and after transformation, the likely expectation is to hold on to the existing metric goals, but this may not be realistic depending on how the microservices are being designed. A boundary analysis done for each new microservice can be inefficient. It may take many releases, possibly years, to have enough content to do a proper analysis. This requires the counter to monitor microservice activity until there is enough content, leaving everything in the same application boundary until that point is reached.
The scenario where each new interface or business function of the application is considered a separate microservice is the least impactful, as it is simply a new way of doing things. The boundary of the application would not change and there should be no major changes to function point counts and metrics.
However, if each new microservice is a redesign of existing functions and features, things are a bit more complicated. A wide variety of things can happen at once in this case. We have to look at what types of changes will be made for the project. Are the changes technically focused on the transformation to a microservice? Business-focused on changes to the functions and features provided? Or both? Will the microservice completely replace the existing functionality, or will two versions of the code need to exist (i.e., the new version and a legacy version for older apps)? Each of these scenarios will have unique impacts to both function point counts and metrics. For example, if an application is replacing an existing feature with a microservice and must also continue to maintain the old version for legacy applications, with the data and functionality the same in both versions, there would be no impact to the function point count because the change is technical in nature for a new platform. However, it would be logical to see a dip in productivity metrics, as the team will need to maintain two sets of code. It is important to understand and discuss these potential impacts before the project starts so that everyone involved has clear expectations.
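To see why the metrics dip even though the count stays flat, consider a sketch of the arithmetic (all figures invented): the functional size delivered is unchanged because the change is purely technical, but the effort now covers two codebases, so FP per staff-hour drops.

```python
# Hypothetical productivity comparison when a microservice replaces a
# feature but the legacy version must also be maintained. Figures invented.

app_fp = 500            # functional size; unchanged, since the change is technical

effort_before = 2000.0  # staff-hours per release maintaining one codebase
effort_after  = 2600.0  # staff-hours once both versions must be maintained

productivity_before = app_fp / effort_before
productivity_after  = app_fp / effort_after

print(round(productivity_before, 3))  # 0.25 FP per staff-hour
print(round(productivity_after, 3))   # 0.192 FP per staff-hour: a visible dip
```

Nothing about the team's performance has degraded; the denominator simply grew. Agreeing on that interpretation before the project starts is what keeps the expectations clear.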