When building new applications, it's common for developers to focus more on the features their application will have, the technology stack used, and so on, rather than on how the application will be hosted. After all, just getting the thing to run on your local machine is all a developer needs to do in order to code. If you find yourself in this position, writing an application without being quite sure where or how it'll be hosted, follow these tips. They certainly won't hurt, and they will make your application more versatile and easier to deploy should you use a container technology such as Swarm or Kubernetes.
Tip 1: Use Environment Variables for Configuration
Different applications and technology stacks have numerous choices for handling configuration. Whichever library you choose, make sure it allows configuration to be passed in via environment variables in a way that's transparent to your application. You don't want code all over the place reading both files AND environment variables. Pick libraries that will allow your application to read configuration values the same way, whether they were in a file or passed in as environment variables.
Another variant of this tip is to allow the application to read its configuration from a parameterized location. So instead of passing in 10+ environment variables, you can pass in a single parameter that specifies where the application should read its configuration from.
Why?
Users cannot easily change configuration files in a container. If you package up your application in a container image, the configuration files are part of that image. By allowing values to be passed in via environment variables, you can use the same image in different environments and have the differing values passed in as environment variables.
If you go with the approach of allowing the configuration file location to be passed in, you can use your container orchestration tool to write your environment specific configuration to a volume and mount it somewhere in the container, passing in the location of the mount path as a parameter to your application.
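As a minimal sketch of this approach in Python (the post is stack-agnostic, so the `CONFIG_PATH` variable, the `APP_` prefix, and the JSON format are all illustrative conventions, not a standard), the application reads file-based configuration first and then lets environment variables transparently override individual values:

```python
import json
import os

def load_config(defaults):
    """Load configuration, letting environment variables override file values.

    CONFIG_PATH (an illustrative name) points at a JSON file; any
    APP_-prefixed environment variable overrides the matching key.
    Callers see one dict either way.
    """
    config = dict(defaults)

    # Optional file-based configuration from a parameterized location,
    # e.g. a volume mounted by the orchestration tool.
    path = os.environ.get("CONFIG_PATH")
    if path and os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))

    # Environment variables take precedence, transparently to the caller.
    for key in defaults:
        env_key = "APP_" + key.upper()
        if env_key in os.environ:
            config[key] = os.environ[env_key]

    return config
```

Because the rest of the application only ever calls `load_config`, it never needs to know whether a value came from a file or from the environment.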
Tip 2: Allow Sensitive Configuration to be Passed in Separately
Your application most likely has database passwords or other information that you don't want checked into source control. The application needs a way to receive this information in such a way that it doesn't have to be stored with your non-sensitive configuration data. The solutions for this can be the same solutions described in Tip 1. If your configuration library of choice allows certain configuration options to be passed in via environment variable, you can simply pass in the sensitive configuration via this mechanism.
One example of this is public-key cryptography, or asymmetric cryptography. If your application requires private keys, how are those keys going to be made available to the container? You shouldn't store them in the image for security reasons. By allowing either the entire key to be passed in as an environment variable, or the location of the private key on a mounted container volume, you allow the keys to be kept safe in the container orchestration engine and passed in when the container starts up.
Why?
The reasoning for this tip has more to do with good deployment practices than with just being container friendly. However, how you implement it can make a difference in how container friendly the application is. Most container orchestration tools allow sensitive information to be stored separately from non-sensitive configuration data. Allowing your application to receive this information via the command line or environment variables makes it much easier to deploy in a containerized environment.
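A small sketch of how both delivery mechanisms can be handled behind one function (the `_FILE` suffix here mirrors the mounted-secret convention used by tools like Docker, but the variable names themselves are illustrative):

```python
import os

def read_secret(name):
    """Resolve a sensitive value without baking it into the image.

    Checks the NAME environment variable first, then NAME_FILE for the
    path of a secret mounted into the container as a volume. The naming
    convention is illustrative, not a standard.
    """
    value = os.environ.get(name)
    if value is not None:
        return value

    path = os.environ.get(name + "_FILE")
    if path:
        with open(path) as f:
            # Mounted secret files often end with a trailing newline.
            return f.read().strip()

    raise RuntimeError(f"secret {name!r} was not provided")
```

Either way, the secret never appears in source control or in the image; the orchestration tool supplies it at container start-up.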
Tip 3: Parameterize File System Paths
Some applications require persistent storage on the local file system. If your application falls into this category and needs data to be available after a reboot, don't assume (for example) that you can just write data to ./data. Allow the file system path to be a configuration value so it can be passed into your application (using the mechanisms described in Tip 1).
Why?
Storage in a container is ephemeral. Whatever data you write to the container's file system is lost when the container stops running. By allowing the file system path to be passed in, you can mount a storage volume at a path within the container that doesn't go away when the container stops. If you always assume the storage path is alongside your application files, you may run into problems when trying to deploy in a production environment.
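Combined with Tip 1, this can be as simple as a single parameterized path (the `DATA_DIR` variable name and `./data` fallback below are illustrative choices, not a convention your stack will enforce):

```python
import os

def data_dir():
    """Return the writable data directory, parameterized via DATA_DIR.

    In a container, DATA_DIR (an illustrative name) would point at a
    mounted volume that survives container restarts; the ./data default
    only makes sense for local development.
    """
    path = os.environ.get("DATA_DIR", "./data")
    os.makedirs(path, exist_ok=True)
    return path
```

All file writes then go through `data_dir()`, so pointing the application at persistent storage is a deployment decision rather than a code change.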
Tip 4: Allow Logging Only to Stdout & Stderr
Different technology stacks have different libraries used for logging. Whichever you choose (hopefully you're using one), make sure it allows for console-only logging and doesn't force you to log to files on disk.
Why?
Most container orchestration platforms expect containers to write log data to the stdout and stderr streams, and they have mechanisms to obtain what was written even after the container stops running. If your application writes only to log files, you will have more work to do in order to access those files after the container has stopped running. While it's not a requirement to stop logging to files, if you're already able to log to the stdout and stderr streams, why have log files?
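With Python's standard `logging` module, for example, console-only logging is one handler pointed at a console stream and no file handlers (shown here as a sketch; any stack's logging library can be configured the same way):

```python
import logging
import sys

def configure_logging():
    """Route all log output to stdout so the container runtime captures it.

    No FileHandler is attached; the orchestration platform's log
    collection takes the place of log files. (stderr is an equally
    common choice for the stream.)
    """
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s: %(message)s"))
    root = logging.getLogger()
    root.handlers = [handler]
    root.setLevel(logging.INFO)
```

Run `configure_logging()` once at start-up and every `logging.getLogger(...)` call in the application inherits the console handler.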
Tip 5: Design for Effortless Horizontal Scaling
It's good practice to design your application so it can run with 1 instance or 20 instances. Being able to add more application instances to improve scale is called horizontal scaling. The methods for achieving this are beyond the scope of this post, but keeping your application stateless, limiting most (if not all) coordination among instances, using worker queues, and so on are all ways to allow your application to scale horizontally.
Why?
While this can't always be achieved, being able to scale horizontally makes your application much more container friendly, because container orchestration platforms make scaling horizontally as easy as clicking a button. If your application requires special handling of its configuration in order to add more instances, you are not allowing administrators to take full advantage of their container orchestration platform.