What if there was a way to deploy a new feature into production — and not actually turn it on until you’re ready?
There is! These tools are called feature flags (or feature toggles or flippers, depending on whom you ask). Feature flags are a powerful way to fine-tune your control over which features are enabled within a software deployment.
Of course, feature flags aren’t the right solution in all cases. You shouldn’t start using them haphazardly, or you may end up with a delivery pipeline more complicated than it needs to be. Read on for a definition of feature flags, an explanation of their benefits, and tips for deciding when and when not to use feature flags.
A feature flag is a switch you can use to turn a feature on or off within an application after the application has been deployed to production.
The last part of the previous sentence is the key to what makes feature flags so powerful — they let you control application behavior post-deployment. This makes them different from a strategy where you release different versions of your application in order to control which features are available.
With feature flags, you can deploy your application once and maintain only one version of your source code while still retaining the ability to toggle certain features on or off when the application runs. That means you can deploy the same version of the application to different environments (such as testing and production) but enable different features in each environment.
In most cases, feature flags are implemented as conditional statements directly inside source code. The statements check whether a given external condition (such as the existence of certain values inside a configuration file) is true before they execute the code. If the condition isn’t true, the code is ignored.
Simple feature flags are easy enough to implement yourself by writing your own conditional statements and control files. Commercial frameworks like LaunchDarkly and Feature Management from CloudBees are also available and may be helpful if you…
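To make the pattern concrete, here is a minimal, hand-rolled sketch in Python. The flag names, the flags.json control file, and the checkout functions are all hypothetical; the point is simply a conditional statement wrapped around the new code path, driven by an external control file.

```python
import json
from pathlib import Path

# Hypothetical control file deployed alongside the application, for example:
# flags.json -> {"new_checkout_flow": true, "beta_search": false}
FLAG_FILE = Path("flags.json")

def is_enabled(flag_name: str) -> bool:
    """Return True if the named flag is switched on in the control file."""
    try:
        flags = json.loads(FLAG_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # fail closed: a missing or invalid control file keeps the feature off
    return bool(flags.get(flag_name, False))

def checkout() -> None:
    # The feature flag is just a conditional wrapped around the new code path.
    if is_enabled("new_checkout_flow"):
        print("running the new checkout flow")     # new, flag-guarded behavior
    else:
        print("running the legacy checkout flow")  # existing behavior

if __name__ == "__main__":
    checkout()
```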
Because feature flags must be implemented within source code, they require some upfront planning and work. They’re not something you can implement once your application has already been deployed (well, you can, but you’d have to redeploy a new version of the application first). Still, the effort that feature flags require is often less than it would take to maintain separate branches of your source code for each set of features that you want to turn on or off – helping you maintain higher application quality.
There are a variety of reasons to consider using feature flags. Teams will have different reasons for adopting feature flags and different methods for implementing them, of course, but you can expect the following benefits.
One of the main benefits of feature flags is that they let you turn certain features in your application on or off without having to redeploy the application or maintain and deploy multiple branches of source code.
Using feature flags makes it easier not only to control the way applications behave in different environments but also to test new features. You can implement new features in your source code but use feature flags so that those features are turned on only in development or testing environments. That way, you can test the features in a test environment, then turn them on in production once they’re ready, without having to redeploy.
This sure beats the conventional approach of deploying a new feature into testing, running your tests, and then deploying the feature again into production.
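As a rough sketch of that per-environment setup, one common approach is to keep one control file per environment and point the application at the right one via an environment variable. The APP_ENV variable, file names, and flag names below are assumptions for illustration, not a prescribed layout.

```python
import json
import os
from pathlib import Path

# Hypothetical per-environment control files:
#   flags.test.json -> {"beta_search": true}
#   flags.prod.json -> {"beta_search": false}
# The same build is deployed everywhere; only the file it reads differs.
ENV = os.environ.get("APP_ENV", "prod")  # e.g. "test" or "prod"
FLAG_FILE = Path(f"flags.{ENV}.json")

def load_flags() -> dict:
    try:
        return json.loads(FLAG_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}  # no control file for this environment: every flagged feature stays off

flags = load_flags()
if flags.get("beta_search", False):
    print("beta search is enabled in this environment")
else:
    print("beta search is disabled in this environment")
```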
Another benefit of feature flags: they can reduce the number of deploys required to get new features into production. Although CI/CD is all about enabling the continuous deployment of new features, in some cases, it isn’t desirable to deploy new code hourly, or more frequently. Deployments take time and effort (even if they’re automated, someone has to watch them and make sure they proceed as expected). And, deployments carry risk (the more deployments you do, the greater the chance something goes wrong with at least one of them).
So, it can make sense to try to limit the number of deployments you do per day.
With feature flags, you can deploy new versions of your code less frequently (perhaps once a day or even once a week) and then turn on the new features as you’re ready. You no longer have to run a separate deployment every time a single feature is ready for release.
Feature flags are especially beneficial if you’re working with a monolithic codebase.
Traditionally, releasing a new feature for a monolith meant redeploying the entire application, even if only a small part of it actually changed. (Part of the point of microservices is that they allow you to redeploy only individual services instead of the whole app when you want to change something.)
With feature flags, however, you can add multiple features to your monolith, redeploy it and turn on the features when you’re ready.
Finally, feature flags can be helpful for addressing a common pain point among developers of production apps: the pressure to align feature release cycles with marketing campaigns.
Marketers usually want to know when an important new feature is going to be released so they can promote it. But, as a developer, it can be hard to guarantee that a feature will be ready for deployment according to a strict schedule. Sometimes you run into delays. Sometimes you finish the feature earlier than you expected.
With feature flags, it’s easier to keep marketing and development’s schedules in sync. Developers can simply release new features but keep them disabled via feature flags until marketers are ready to promote them.
Despite all their benefits, feature flags are not the right solution for every situation. The following are some considerations that might lead you to decide not to use feature flags, or to limit your use of them.
Whenever you implement a feature flag, you run the risk of the flag being turned on by accident, potentially enabling a feature you’re not yet ready to use in a certain environment. It’s one thing if you accidentally turn on a feature that causes usability or performance issues. It’s another if you turn on a feature that has security implications but hasn’t yet been properly tested.
For this reason, it’s a best practice to avoid using feature flags to control features that are sensitive from a security perspective. If your feature deals with authentication or decryption of sensitive data, for example, it’s safer to wait until it has been fully tested before you even deploy the code. Deploying it and assuming you can keep it turned off with a feature flag until you’re ready to use it is just too risky.
In modern application deployment architectures, it’s common to control feature flags using configuration data hosted on the same server that hosts the application or, if the application runs locally, by having the application check in with a remote resource, such as a configuration file hosted in the cloud.
If your application architecture allows you to do this, great. You’ll retain full control over whether each feature flag is turned on or off.
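A minimal sketch of that "check in with a remote resource" pattern might look like the following. The URL, the two-second timeout, and the fail-closed fallback are assumptions; a real setup would typically add caching, authentication, and retries.

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical endpoint owned by the development team, serving something like:
# {"new_checkout_flow": false, "beta_search": true}
FLAGS_URL = "https://config.example.com/myapp/flags.json"

def fetch_flags() -> dict:
    """Pull the flag configuration from a server the development team controls."""
    try:
        with urlopen(FLAGS_URL, timeout=2) as response:
            return json.loads(response.read())
    except (URLError, TimeoutError, json.JSONDecodeError):
        return {}  # fail closed: if the config server is unreachable, all flags read as off

if __name__ == "__main__":
    flags = fetch_flags()
    print("beta_search enabled:", flags.get("beta_search", False))
```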
But, if you’re deploying your application in such a way that feature flags can only be controlled using a file that you as the developer don’t own, they can be risky. The most common scenario where this would be an issue is if you have an application that can’t connect to the network, maybe because…
In this scenario, your only option for controlling feature flags is to use a configuration file that’s pushed out with the application itself and is hosted locally on the user’s device. That’s risky because it means the user could potentially enable a feature that shouldn’t be enabled.
The bottom line: If you can’t implement feature flags in such a way that the development team retains the sole ability to turn them on or off, don’t use them.
Every time you add a feature flag to your application, you create something else you have to maintain.
You’ll need to keep the feature flag control updated by ensuring it’s properly configured to turn the feature on or off as desired under different circumstances. Plus, unless you like the thrill of undocumented features, you’ll need to keep the feature flag documented somewhere if you want to have any hope of keeping track of its status as your application evolves.
The only way to free yourself of this responsibility is to remove the feature flag from the source code and enable the feature directly within the application (and then update your documentation).
In this sense, feature flags become a form of technical debt. You must keep maintaining them until you put in the effort required to update your source code and “pay off” the debt.
This doesn’t mean feature flags are an inherently bad thing. But it does mean you should use them strategically, and consider how many feature flags you are already maintaining before you decide to add a new one. If you try to control every feature with a feature flag, you’ll end up drowning in configurations and documentation.
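One lightweight way to keep that documentation and cleanup work visible is a small in-code registry of every flag, its owner, and its planned removal date. The schema and the flag entries below are purely illustrative assumptions, not a required format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class FlagRecord:
    """Metadata that keeps a flag's owner, purpose, and planned removal easy to track."""
    name: str
    owner: str
    created: date
    purpose: str
    remove_after: date  # a planned removal date makes it easier to pay down the debt

# Hypothetical registry listing every flag currently in the codebase.
FLAG_REGISTRY = [
    FlagRecord("new_checkout_flow", "payments-team", date(2024, 3, 1),
               "Gradual rollout of the rewritten checkout flow", date(2024, 9, 1)),
    FlagRecord("beta_search", "search-team", date(2024, 5, 15),
               "Search relevance experiment", date(2024, 12, 1)),
]

if __name__ == "__main__":
    for record in FLAG_REGISTRY:
        print(f"{record.name}: owned by {record.owner}, remove after {record.remove_after}")
```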
Of all the things within an application that can consume resources, feature flags are usually not a big concern. After all, they’re basically just “if” statements, and it doesn’t take a whole lot of CPU cycles to run one of those.
Still, feature flags add some overhead to your application — not just from CPU utilization, but also from I/O, which could be the bigger bottleneck if your app has to look through complex file directories to access the flag for each feature.
If you’re developing an application where every millisecond matters, you may want to think about the impact of feature flags before you decide to use them. A server-side web app that takes a few extra milliseconds to start every instance because it has to check feature flags could create a large overall performance hit if it’s running thousands of instances per hour.
Keep in mind, too, that an application that includes a lot of feature flags may suffer a serious performance delay due to network latency if it can’t initialize until it has pulled the flag configuration data from over the network.
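If that overhead matters for your application, a common mitigation is to read the flag configuration once at startup and cache it in memory, so each individual check is just a dictionary lookup. The sketch below assumes a local flags.json control file like the earlier examples.

```python
import json
from functools import lru_cache
from pathlib import Path

FLAG_FILE = Path("flags.json")  # hypothetical local control file

@lru_cache(maxsize=1)
def _load_flags_once() -> dict:
    """Read and parse the control file a single time; later calls hit the in-memory cache."""
    try:
        return json.loads(FLAG_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def is_enabled(flag_name: str) -> bool:
    # After the first call there is no file or network I/O, so each check costs almost nothing.
    return bool(_load_flags_once().get(flag_name, False))

if __name__ == "__main__":
    print("new_checkout_flow:", is_enabled("new_checkout_flow"))
```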
Feature flags are a great way to streamline the testing of new features and reduce the frequency of your deployments without compromising your ability to exert fine-grained control over when new features become available within an application. However, before you go turning everything into a feature flag, make sure you are aware of their drawbacks and limitations.