This post was updated Nov 16, 2019 to use up-to-date Serverless CLI commands, simplify the handler code by removing Express, and simplify the approach to handling Heroku Postgres credential rotation.
FaaS (Function as a service) cloud services are often paired with the cloud provider’s proprietary managed database service. Examples include AWS’s Lambda paired with their DynamoDB database, and Google Cloud’s Cloud Functions paired with their Cloud Firestore database. Thankfully, these managed databases include free tiers. However, if you’d rather use an SQL database, you rarely have an option for a managed SQL database with a free tier.
Here’s an approach to connect a Lambda function to Heroku’s managed PostgreSQL service, Heroku Postgres, which offers a free tier. Combined, you’ll get 1M requests and 400,000 GB-seconds of compute time per month from AWS, and 10,000 PostgreSQL rows from Heroku for free.
- Node.js version 10 or higher installed
- AWS SDK installed
- An AWS user account set up with the privileges needed to deploy a Serverless service
- The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set for an access key for this AWS user account
Want to skip to the finished code? See mattwelke/serverless-managed-db-examples.
Start Building the App
To help with deploying the Lambda function, you can use Serverless Framework. It automates many of the steps needed to deploy a Lambda function, as long as you've got the AWS CLI installed and configured. You'll use Node.js in particular for this tutorial, but the approach to connect to Heroku Postgres will work for any language supported by AWS Lambda. Install the Serverless CLI with the following command:
npm install -g serverless
Run the following command to create the app:
sls create --template aws-nodejs --path my-service --name my-service
You should see output similar to:
```
Serverless: Generating boilerplate...
Serverless: Generating boilerplate in "/home/matt/my-service"

 The Serverless Application Framework
 serverless.com, v1.57.0

Serverless: Successfully generated boilerplate for template: "aws-nodejs"
```
All sls commands in the tutorial will be run from the app directory.
You’ll notice the serverless.yml file has many comments. You can delete the comments. After doing so, the file contains the following:
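The cleaned-up file isn't reproduced in this extract. With the comments removed, it looks roughly like this (the runtime value shown is an assumption matching the Node.js 10 prerequisite and the v1.57 CLI used here):

```yaml
service: my-service

provider:
  name: aws
  runtime: nodejs10.x

functions:
  hello:
    handler: handler.hello
```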
In this file:
- service refers to the name of the Serverless service created, which is a way to group multiple Lambda functions into one logical “service” or “app”.
- provider refers to the cloud provider you’re going to deploy to (because Serverless Framework supports other clouds too, such as Google Cloud and Azure).
- functions lists each Lambda function in the service.
- Each Lambda function has a name. The function skeleton created by the sls create command is named hello.
- Each function’s handler points at the code to run. The hello function’s handler is handler.hello because the file is handler.js and the exported function is hello.

You’ll be editing handler.js and serverless.yml throughout the rest of the tutorial.
Deploy the service
Run the sls deploy command from the app directory. You should see output similar to:
```
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (387 B)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
.................
Serverless: Stack update finished...

Service Information
service: my-service
stage: dev
region: us-east-1
stack: my-service-dev
api keys:
  None
endpoints:
  None
functions:
  hello: my-service-dev-hello
layers:
  None
```
Behind the scenes, the AWS SDK was invoked to manage every AWS service needed to deploy the
hello function. However, because the template only defined a Serverless Framework function and not a Serverless “event” for it, there was no configuration of API Gateway done to set up the function for HTTP access.
To set up HTTP access, change serverless.yml to the following:
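A sketch of the updated file, adding an HTTP GET event to the function (the path and method are assumptions matching the endpoint shown in the deploy output below):

```yaml
service: my-service

provider:
  name: aws
  runtime: nodejs10.x

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```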
Then, run the
sls deploy command again and observe that an endpoint will now be available for your function. The deploy command’s output will be similar to:
endpoints: GET - https://lpo949101b.execute-api.us-east-1.amazonaws.com/dev/hello
At this point, you can hit that endpoint in your browser and see a JSON response. If you do, your setup is working and you can continue with the tutorial. ✔️ If instead you get an “internal server error” response, check CloudWatch Logs for a log stream corresponding to your Lambda function to learn why it didn’t execute properly.
Connect service to Heroku Postgres
You must have a Heroku app created to be associated with the Heroku Postgres service you will use. Create your Heroku app and choose the United States region, which will put your database geographically near the AWS region used to deploy your functions:
You now need to create the Heroku Postgres resource. On your app’s “Resources” tab, under “Add-ons”, type
postgres and when “Heroku Postgres” appears as an option, select it:
In the modal that appears, select “Provision” with the free tier selected:
Click on your “Heroku Postgres” that appears in the list, and you’ll be brought to a screen with statistics such as the number of active connections and rows in the database right now. Click on the “Settings” tab and then the “View Credentials…” button. The credentials will be displayed. Note the following information which you’ll insert into the function source code next to connect to the database:
Run the npm install pg command from the app directory to install the PostgreSQL client NPM module so that the function can use it. Then replace the contents of handler.js with the following code, replacing the credentials in this example (which I’ve since rotated out - don’t worry!) with the credentials for your Heroku Postgres service:
There are a few points to explain in this code:
- The pgPool variable is declared outside of the handler code, so that it is kept around as long as the Lambda function’s “backing container” is kept warm. Lambda will only create a new database connection on a function “cold start”.
- The connection pool’s max is set to 1, because each backing container for the Lambda function can only handle one HTTP connection at a time, so it would be wasteful for each backing container to create more than one connection.
- The setupPgPool function is only called if the pgPool variable is null. This prevents us from wasting time setting up a new database connection for a warm backing container, which is much more efficient. Observing the CloudWatch logs, I saw a cold start take ~70 ms to run, while warm starts, where a connection was reused, took less than 3 ms.
Run the sls deploy command again to deploy these changes, and visit the endpoint in your web browser. You’ll get a response like:
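The exact body depends on what your handler returns; querying the current time from the database, for example, might produce something along the lines of:

```json
{"now":"2019-11-16T20:32:01.519Z"}
```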
You’ll also notice that there is now a connection open to your Heroku Postgres service (though, of course, no rows or tables yet) if you visit your Heroku dashboard:
This connection is being held open by the one backing container that AWS is using to run your Lambda function right now. When AWS kills the backing container, which usually happens after a few minutes, the number of open connections to Heroku Postgres will return to zero. If a burst of traffic hits your Lambda function’s endpoint, the number of open connections will grow beyond one.
Managing automatic rotation of Heroku Postgres credentials
The only remaining issue is to deal with the fact that every now and then, the Heroku Postgres credentials will change because Heroku regularly rotates them. This is by design. A Heroku app handles this automatically because the
DATABASE_URL environment variable that Heroku provides their apps is updated with a new value and the Heroku app is restarted. Since Heroku can’t automatically restart your Lambda function backing containers, you’ll need to handle this yourself.
There are multiple ways to deal with this problem, but here’s a simple solution, where instead of configuring the Lambda function with Heroku Postgres credentials, you configure it with a Heroku API key capable of fetching the Heroku Postgres credentials. When the Lambda function’s backing container starts, it will perform a request to the Heroku API to fetch the credentials, and then use them to create the connection pool.
You will need a Heroku API key to do this. You can create a token specifically for this app using the Heroku CLI, or you can grab the API key associated with your account by going into your account settings on the dashboard and noting the “API Key”. You will also need to note the ID of your Heroku Postgres add-on, which is available in the Heroku dashboard.
Run the npm install axios pg-connection-string command. This adds the axios library, an easy-to-use HTTP client that supports promises, and the pg-connection-string library, which provides an easy way to parse the PostgreSQL connection string returned by the Heroku API into the config object the pg library needs to create the connection pool.
Change handler.js to the following, including your Heroku API key:
Note that creating a PostgreSQL connection pool is now an asynchronous operation, because a request must first be made to the Heroku API. The async keyword is added to the setupPgPool function, and the function is called with await.
You may notice a bit more cold start time with this change, because creating the database connection pool takes longer, but this shouldn’t be too bad in the long run compared to other factors that can increase Lambda function cold start time, such as using too many NPM libraries or using an AWS VPC (though that’s changing).
Your web app is ready to go into production once you take care of hiding your Heroku API key in an environment variable so that people seeing your app’s source code won’t gain access to your Heroku account.
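Serverless Framework can pass such a variable through to the function. A sketch of the provider section, assuming the variable name HEROKU_API_KEY (my choice) is set in the shell at deploy time:

```yaml
provider:
  name: aws
  runtime: nodejs10.x
  environment:
    # Resolved from the shell environment when you run sls deploy
    HEROKU_API_KEY: ${env:HEROKU_API_KEY}
```

The handler then reads it with process.env.HEROKU_API_KEY instead of hard-coding it in the source.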
Examples and Other Cloud Product Combinations
The completed code for this tutorial can be found in the mattwelke/serverless-managed-db-examples GitHub repository.
One reason I like this particular pairing of FaaS product and managed database product is that, if you can’t afford an extensive DevOps team, you get a lot for your money. AWS and Heroku aren’t paying me to say this - I just think it makes sense. AWS Lambda will scale considerably before billing kicks in. When you want more than 10,000 PostgreSQL rows, you can get 10,000,000 rows for USD$9/mo. Later, you can bump it up to “4 GB RAM, 64 GB storage, 120 connections” for USD$50/mo. Complete plan details can be found here. Unless you’re running a big data type system, and as long as you’re using object storage products like AWS S3 instead of storing binary data in the database, that’s more than enough to build something great.
Because this pattern extends to all managed database products, other examples with different combinations of FaaS products and databases are possible. For example, another option for a managed SQL database would be connecting to AWS RDS from AWS Lambda. There are also smaller managed SQL database providers out there like jawsDB and Aiven. I don’t endorse these providers, I’m just aware of them.
If a provider allows you to host the managed database in a particular cloud, this also gives you the option to work with other FaaS products like Google Cloud Functions, Azure Functions, or IBM Cloud Functions, because you’ll be able to run the FaaS product and the managed database geographically close to each other to achieve good performance. Check out the other directories in the examples repository to see them. Note that any credentials you see hard-coded in this tutorial or in the examples repository are rotated out and will not work for you.
Cleanup Cloud Services
To delete all the AWS resources created during the tutorial, run the
serverless remove command. To delete the Heroku resources, visit the Heroku dashboard in your web browser and delete the Heroku app you created.