Amazon's AWS has done a huge service to the web community by providing a vast (and often overwhelming) tool set for building applications. AppSync is Amazon's answer to a GraphQL server implementation: it gives you the flexibility of choosing your data sources and hooking up your resolvers with fairly easy-to-use methods through the AWS Console. The drawback here is that when working on a team, having more hands on the wheel of the console can make for a bit of a mess, plus it's not as maintainable as managing code locally. That's where Serverless comes in!
Serverless is an incredible CLI that allows you to describe what you need from your AWS stack in a YAML file... and then Serverless handles the rest. When you deploy, it boils your config down into a CloudFormation template and ships it off to AWS to have your stack created/updated, connecting it all together along the way. It truly feels like magic at times! This includes creating Lambdas and packaging/bundling them up to be uploaded to S3 and deployed. There's even a wide array of plugins for bundling (Webpack, Parcel, and an unfortunately unmaintained Rollup plugin) so you can make your Lambdas teeny-tiny with ease, so they warm up and fire real quick without much cost to you.
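As a rough sketch of what that looks like (the service name, runtime, and file paths here are invented for illustration), a minimal `serverless.yml` describing a single bundled Lambda might be:

```yaml
# serverless.yml -- illustrative sketch; names and paths are made up
service: music-api

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

plugins:
  - serverless-webpack   # one of the bundling plugins mentioned above

functions:
  musicHandler:
    handler: src/music.handler
```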
There's another nifty plugin to allow AppSync integration: serverless-appsync-plugin. Like vanilla Serverless, the plugin lets you describe your needs from AppSync in YAML. AppSync has a bit more overhead in terms of creating resolvers, though. Let's talk about the process a bit:
AppSync uses Apache Velocity Templates to resolve your GraphQL fields. Basically, when it receives a GQL request, it'll look at the field of that request, and then look in your config for a Request Mapping Template entry to determine what to do. Those look like this:
```yaml
- dataSource: MusicHandler
  type: Query
  field: getTracks
  request: 'mapping-templates/getTracks-request.vtl'
  response: 'mapping-templates/json-response.vtl'
```
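For context, those entries live under the plugin's `custom.appSync` section of `serverless.yml`, alongside the data sources they reference. This sketch mirrors serverless-appsync-plugin's config shape as I understand it, so treat the exact keys as an assumption and check the plugin's docs:

```yaml
# Illustrative sketch of where mapping templates fit in serverless.yml
custom:
  appSync:
    name: music-api            # invented name
    authenticationType: API_KEY
    dataSources:
      - type: AWS_LAMBDA
        name: MusicHandler
        config:
          functionName: musicHandler   # a function from the `functions:` block
    mappingTemplates:
      - dataSource: MusicHandler
        type: Query
        field: getTracks
        request: 'mapping-templates/getTracks-request.vtl'
        response: 'mapping-templates/json-response.vtl'
```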
From there, it'll hit the actual request template to formulate a response. A `<field>-request.vtl` would look like this:
```vtl
{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "field": "getTracks",
    "arguments": $utils.toJson($context.arguments)
  }
}
```
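The `json-response.vtl` referenced in the config above is typically just a pass-through that serializes whatever the data source returned, which is why one shared response template can serve many fields:

```vtl
$util.toJson($context.result)
```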
The `payload` object is sent to your dataSource as information you can use to resolve the queries. For example, if your dataSource is defined as a Lambda, your Lambda will have a `handler` function with this signature: `handler(event, context, callback)`. In that example, `payload === event`!
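To make that concrete, here's a minimal sketch of what such a Lambda could look like. The resolver map, the `getTracks` return shape, and the track data are all invented for illustration; only the `event.field`/`event.arguments` layout comes from the mapping template above:

```javascript
// Hypothetical handler for the MusicHandler Lambda data source.
// AppSync invokes it with the request template's `payload` as `event`,
// so `event.field` and `event.arguments` match what the VTL sends.
const resolvers = {
  // Invented example data; a real resolver would hit a database, etc.
  getTracks: (args) => [{ id: '1', title: 'First Track' }],
};

const handler = (event, context, callback) => {
  const resolve = resolvers[event.field];
  if (!resolve) {
    // Unknown field: surface an error back to AppSync.
    callback(new Error(`Unknown field: ${event.field}`));
    return;
  }
  // Whatever we pass here becomes $context.result in the response template.
  callback(null, resolve(event.arguments));
};

module.exports = { handler };
```

One nice property of this shape: adding a new query is just adding a key to `resolvers`, plus the matching mapping-template entry.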
Well, focusing entirely on Lambdas here, you can scale resolvers independently. In my implementation, we used one Lambda per service (roughly), which allowed us to scale up resolvers that may need a bit more memory to get the job done and scale down lighter-weight ones. This inherently encourages Lambda code that is lightweight and pure, and discourages bloat.
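To sketch that one-Lambda-per-service scaling idea (the function names and memory sizes here are invented), each service's function can declare its own `memorySize` in `serverless.yml`:

```yaml
# Illustrative sketch: tune memory per service
functions:
  musicHandler:          # heavier resolver work
    handler: src/music.handler
    memorySize: 1024
  metadataHandler:       # lightweight lookups
    handler: src/metadata.handler
    memorySize: 256
```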
Your `serverless.yml` file will stay relatively small regardless. For larger projects this sucks (though without the offline functionality, development workflows can be sloooooow). Is it a dealbreaker? Probably not. I'll update as I continue to work with this exciting stack.