The use of serverless execution models is expanding rapidly, and cloud providers continue to enhance their platforms. Per Flexera’s “State of the Cloud” report:
"Year over year, serverless was the top-growing extended cloud service for the second year in a row, with a 50 percent increase over 2018 (24 to 36 percent adoption)."
Riding this trend, Amazon has over the last two years released several features that address AWS Lambda’s pain points and make it a more feasible choice for large-scale deployments consisting of numerous applications. “Provisioned concurrency” (which helps avoid cold-start issues) has generated the most excitement, but there are also “Lambda layers”, which make it easier to share code and other dependencies between different functions in your solution - or even between AWS accounts.
So, to what extent do Lambda layers improve the user experience with AWS Lambda?
In short, an AWS Lambda layer is an artifact that can be published to an AWS account and later added to a Lambda function. The contents of the artifact need to have a particular structure (which varies between runtimes) and are unpacked to a directory accessible to the Lambda code. You can use a Lambda layer to provide your Lambda function with data or required dependencies.
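The required structure can be sketched with a minimal example. For the Python runtime, library code must sit under a top-level python/ directory in the zip, because Lambda extracts the layer to /opt and /opt/python is on the function’s sys.path. The module name and its contents below are made up for illustration:

```python
import pathlib
import zipfile

# Lay out the layer contents: for the Python runtime, code goes under python/.
pathlib.Path("layer/python").mkdir(parents=True, exist_ok=True)
pathlib.Path("layer/python/mylayerlib.py").write_text(
    'GREETING = "hello from the layer"\n'
)

# Package the artifact; the archive entry must keep the python/ prefix,
# since Lambda unpacks the zip as-is into /opt.
with zipfile.ZipFile("layer.zip", "w") as zf:
    zf.write("layer/python/mylayerlib.py", arcname="python/mylayerlib.py")

print(zipfile.ZipFile("layer.zip").namelist())  # ['python/mylayerlib.py']
```

The resulting layer.zip is what gets passed to the publish command shown later.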
AWS Lambda Console displaying a sample Lambda function with a layer assigned
Sharing dependencies - both externally and internally - seems like an important use case for AWS Lambda layers. For example, AWS itself publishes a publicly available layer containing NumPy and SciPy, two popular Python scientific packages. A Python developer who wishes to experiment with AWS Lambda and use SciPy in the code does not need to install the library and upload the package to AWS - they can just write code in the online editor and attach the public layer.
Adding Python SciPy layer in AWS Console
This capability enables rapid prototyping in the AWS console without the need to create a deployment package for the function, especially for the scripting languages: Python, Node.js, Ruby.
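The mechanics behind this can be simulated locally: Lambda puts the layer’s /opt/python directory on the function’s sys.path, so anything the layer ships there is immediately importable by the handler. The sketch below stands in for /opt with a temporary directory and uses a made-up module name; nothing AWS-specific runs:

```python
import pathlib
import sys
import tempfile

# Stand-in for /opt/python, where Lambda unpacks a Python layer's contents.
opt_python = pathlib.Path(tempfile.mkdtemp()) / "python"
opt_python.mkdir()
(opt_python / "sharedlib.py").write_text("def answer():\n    return 42\n")

# Lambda does the equivalent of this for /opt/python at cold start.
sys.path.insert(0, str(opt_python))

import sharedlib  # resolved from the simulated layer directory

print(sharedlib.answer())  # 42
```

This is why attaching a public layer in the console is enough to make its libraries importable without any deployment package.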
In the case of compiled (or transpiled) languages, layers with dependencies are trickier to use: in order to build the artifact (.jar, .dll) locally, the dependency needs to be available on the developer’s system at build time anyway, and there is no way to work on the code in the AWS console’s online editor. For comfortable work on a larger project in Python, Node.js or Ruby, the online editor is not practical either, and the dependency needs to be downloaded anyway to permit a reasonable development experience in the local environment.
In such cases, the usefulness of a layer seems to be limited to reducing the size of the deployment artifact and to making it easy to share common versions of dependencies between a number of functions.
In summary, as an AWS Lambda user, the following may motivate you to use a layer:
As a library maintainer, you may decide to publish your library as a layer to provide AWS Lambda users with the gains mentioned above. You may also want to do so if you want to share common pieces of code or data between a number of Lambda functions deployed to one or many AWS accounts.
The basic way to publish a layer is to use the AWS CLI and publish the required artifact directly to AWS:
aws lambda publish-layer-version --layer-name <layer-name> --description <description> --license-info <license-info> --zip-file fileb://<file-path> --compatible-runtimes <compatible-runtimes> --region <region>
For example:
aws lambda publish-layer-version --layer-name SampleLayer --description "Sample python layer" --license-info "Apache-2.0" --zip-file fileb://layer.zip --compatible-runtimes python3.7 python3.8 --region eu-central-1
The main limitation of this method is that a published layer is available only in the selected region, which means that if you want users to use the published layer globally, you need to publish in each region.
If you wish to publish from the S3 bucket, the bucket needs to be in the same region as the layer, which again forces you to create one bucket per region.
Thus the viable alternative may be publishing layer as a SAM (Serverless Application Model) template which, upon deployment, will deploy a copy of a layer to a user’s account.
For a SAM application to be publicly available in all regions, it needs to be originally published to us-east-1 or us-east-2. Privately shared applications are only available in the region in which they were created.
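A minimal SAM template publishing a layer might look like the sketch below. The resource type `AWS::Serverless::LayerVersion` is the documented SAM resource for layers; the application metadata, names, and file paths here are illustrative:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Metadata:
  AWS::ServerlessRepo::Application:
    Name: sample-layer            # illustrative application name
    Description: Sample python layer
    Author: example-author
    SemanticVersion: 1.0.0
Resources:
  SampleLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: SampleLayer
      ContentUri: ./layer.zip     # local artifact, uploaded on `sam package`
      CompatibleRuntimes:
        - python3.7
        - python3.8
Outputs:
  LayerArn:
    Value: !Ref SampleLayer       # the version ARN users attach to functions
```

Deploying this application in the user’s account creates their own copy of the layer, which sidesteps the one-region-per-publication limitation.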
The worst caveat of this approach is the complexity of the deployment process: for less advanced users it may be counter-intuitive that in order to use your layer, they first need to deploy it from the SAM template as an application and only then add it to the function. From a user's point of view, SAM templates are available either from the Serverless Application Repository or as one of the choices when creating a Lambda function, neither of which is a particularly obvious place to look for a layer.
The publishing process is a bit different for the .NET runtime, which has its own CLI tooling (the Amazon.Lambda.Tools extension for the .NET CLI):
dotnet lambda publish-layer SampleLayer --layer-type runtime-package-store --s3-bucket test-bucket --region eu-central-1
There are two main limitations which severely impact the ease of use of AWS Lambda layers:
If it wasn’t for the second point, one could imagine use cases such as a company-wide maintained set of dependencies that would automatically update to the latest patched version with no action on the part of Lambda function owners. Right now, a person deploying a Lambda function needs to specify the exact layer version. If the owner of a layer deletes it, users will be forced to update only the next time they edit their function. (Amazon internally keeps all referenced versions of Lambda layers, so no existing referencing functions are broken.)
On the other hand, layers do not enforce semantic versioning (i.e. a version number consisting of three numbers in the format “major.minor.patch”), and it would be difficult to decide which upgrades are safe and can be performed automatically.
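To make the missing piece concrete, here is a toy upgrade policy of the kind semantic versioning would enable - auto-apply minor and patch bumps, hold major ones. This is not anything the Lambda API provides: layer versions are a single monotonically increasing integer, so this information simply is not there. The version strings are illustrative:

```python
def safe_to_auto_upgrade(current: str, candidate: str) -> bool:
    """Toy semver policy: only same-major upgrades are applied automatically."""
    current_major = int(current.split(".")[0])
    candidate_major = int(candidate.split(".")[0])
    return candidate_major == current_major

print(safe_to_auto_upgrade("1.4.2", "1.5.0"))  # True  - minor bump, auto-apply
print(safe_to_auto_upgrade("1.4.2", "2.0.0"))  # False - major bump, hold
```

With only integer layer versions, a pipeline has no way to distinguish the two cases above.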
If we want to conveniently use the layer locally, and at the same time still benefit from the size reduction of the deployment package, the concept of layers (or at least runtime provided dependencies) must be supported at the level of the chosen build system.
For now, only .NET Core has full support for a convenient workflow with Lambda layers in its Lambda CLI: the intention to make a dependency a runtime-provided artifact is communicated with a special --layer-type switch:
dotnet lambda publish-layer SampleLayer --layer-type runtime-package-store --s3-bucket test-bucket --region eu-central-1
And then the layer arn is passed in the deploy-function command:
dotnet lambda deploy-function SampleFunction \
  --function-layers <LAYER-ARN> \
  --function-role <IAM-ROLE> \
  --environment-variables <KEY>=<VALUE> \
  --region eu-central-1
It is this command that builds the function artifact from the contents of the current directory. However, what makes for a great user experience when experimenting with Lambdas and layers becomes a bit troublesome when it comes to implementing a (continuous) delivery pipeline. A traditional delivery pipeline is likely to have a step that stores the latest build artifact in an artifact repository, with deployment as a separate step (manual or automatic). While the creation and storing of the artifact happen under the hood with the .NET Lambda CLI as well, one has little control over the process, and it will likely require a non-standard CD pipeline. An interesting discussion around these concerns can be found here.
It is possible to handle dependencies in layers in other runtimes too: in Java, the exclusion of a dependency from the main artifact can be achieved with the ‘provided’ dependency scope, and in Node.js we can artificially exclude a dependency by putting it in the devDependencies section. For now, though, the tooling for the rest of the supported runtimes has some catching up to do.
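The Node.js workaround can be sketched as a package.json fragment. Here lodash is assumed to be provided by a layer, so it is listed under devDependencies; packaging the function with a production-only install (e.g. `npm install --production`) then leaves it out of the deployment artifact. The package names and versions are illustrative:

```json
{
  "name": "sample-function",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "^1.3.0"
  },
  "devDependencies": {
    "lodash": "^4.17.15"
  }
}
```

The drawback is that this conflates “provided by a layer” with “needed only for development”, which is exactly the kind of distinction the .NET --layer-type switch makes explicit.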
We provide our SignalFx Lambda wrappers as layers. The wrappers allow for pushing metrics from AWS Lambda functions to SignalFx without relying on AWS Cloudwatch (and for pushing custom metrics and traces). The metrics for all monitored functions or for a selected one can be viewed with the default SignalFx Lambda Dashboards.
Default SignalFx Lambda dashboard for a single function
With the concept of AWS Lambda layers being relatively new and our wrappers-as-layers still evolving, it is hard to say yet whether this delivery mechanism will become popular. What we can say right now is that it is a pretty convenient way to prototype or quickly demo how our Lambda wrappers work, especially for the scripting languages.
It will be interesting to see whether AWS Lambda layers take off. There are some ideas and tools that could help popularize them, such as the AWS Lambda container image converter (the aws-lambda-container-image-converter repository in the awslabs GitHub account). A curated and maintained repository of layers with popular dependencies would also be a great resource for people looking to try out Lambdas.
For now, however, the number of use cases is limited, the tooling has not yet fully caught up, and thus Layers may be useful and worth the additional effort only to a small group of users.
Please check out our docs to learn more about how to use SignalFx with AWS Lambda.
----------------------------------------------------
Thanks!
Marta Tatiana Musik