How to Orchestrate Data Flows in a Hybrid Cloud with AWS Step Functions | Kranio

When deploying the project, API Gateway will interpret this file and create the following in your AWS console:

To find the URL of the deployed API, go to the Stages menu. A stage is a snapshot of your API at a given moment (tip: you can have as many stages as you want, each with a different version of your API). Here you can use an abbreviation for the environment you are working in (dev, qa, prd), indicate the version of the API (v1, v2), or mark it as a test version (test).

In the API Gateway configuration, we indicated that we would deploy with the stage name “dev”, so when you go to Stages you will see something like this:

You can find out the URL of each endpoint by clicking on the listed names.  This is how the create_client endpoint looks:

Infrastructure

Here we will create the relational database and the Event Bridge event bus. 

For now, the DB will be in the AWS cloud, but it could be a database in your own data center or in another cloud. 

The Event Bridge event bus lets us connect two isolated components that may even live in different architectures.  Learn more about this service here

This repository is smaller than the previous one, as it only declares two resources.

Serverless.yml

# serverless.yml
service: webinar-iass
 
provider:
 name: aws
 runtime: nodejs12.x
 stage: ${opt:stage, 'dev'}
 region: {YOUR-REGION-HERE}
 versionFunctions: false
 deploymentBucket:
   # here you must put the name of an S3 bucket you determine for deployment. Example: my_project_serverless_s3
   # if you put the same in each serverless file, your deploy per project will be organized and within the same bucket.
   name: kranio-webinar
 
resources:
 - Resources:
     WebinarMariaDB:
       Type: AWS::RDS::DBInstance
       Properties:
         DBName: WebinarMariaDB
         AllocatedStorage: '5'
         Engine: MariaDB
         DBInstanceClass: db.t2.micro
         # here goes the database user and password. self:provider.stage makes it take the environment parameter from the deploy
         # and opens the file with that name (e.g. dev.yml). This way you can parameterize your values.
         MasterUsername: ${file(${self:provider.stage}.yml):db_user}
         MasterUserPassword: ${file(${self:provider.stage}.yml):db_pass}
       DeletionPolicy: Snapshot
 
     # here we declare the Event Bridge event bus.
 
     WebinarEventBus:
       Type: AWS::Events::EventBus
       Properties:
         Name: WebinarEventBus


You need to create the following tables in your DB. You can use the database scripts here as a guide.

                                                                   
CLIENTS
  name         VARCHAR(25)
  lastname     VARCHAR(25)
  rut          VARCHAR(25)
  mail         VARCHAR(25)
  phone        VARCHAR(25)

PARTNERS
  rut          VARCHAR(25)
  store        VARCHAR(25)

BENEFIT
  rut          VARCHAR(25)
  wantsBenefit BOOL
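If you prefer to script the schema, the tables above can be created with DDL like the following. This is a sketch under stated assumptions: the statements are derived from the column listing above, and `create_tables` expects any DB-API connection to your MariaDB instance (for example one opened with the third-party pymysql package).

```python
# DDL derived from the table listing above. Adapt types and constraints
# to your real model; this sketch reproduces only what the article lists.
DDL = {
    "CLIENTS": """
        CREATE TABLE IF NOT EXISTS CLIENTS (
            name     VARCHAR(25),
            lastname VARCHAR(25),
            rut      VARCHAR(25),
            mail     VARCHAR(25),
            phone    VARCHAR(25)
        )""",
    "PARTNERS": """
        CREATE TABLE IF NOT EXISTS PARTNERS (
            rut   VARCHAR(25),
            store VARCHAR(25)
        )""",
    "BENEFIT": """
        CREATE TABLE IF NOT EXISTS BENEFIT (
            rut          VARCHAR(25),
            wantsBenefit BOOL
        )""",
}


def create_tables(connection):
    # connection: any DB-API connection to the WebinarMariaDB instance
    with connection.cursor() as cursor:
        for statement in DDL.values():
            cursor.execute(statement)
    connection.commit()
```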

With these steps, we conclude the creation of the infrastructure.

Step Functions

Each “step” of the State Machine we will create is a Lambda function.  Unlike the Lambdas mentioned in the API Gateway section, whose role is to write to the DB, these make requests to the API Gateway endpoints.

According to the architecture indicated above based on a sequential flow, the State Machine should have these steps:

  1. Receive data from a source (e.g. a web form) through the Event Bridge event.
  2. Take the event data, build a payload with the name, last name, phone, branch, and rut, and send it to the create_client endpoint so the backend writes it to the CLIENTS table.
  3. Take the event data, build a payload with the rut and branch, and send it to the create_partner endpoint so the backend writes it to the PARTNERS table.
  4. Take the event data, build a payload with the rut and wantsBenefit, and send it to the create_benefit endpoint so the backend writes it to the BENEFIT table.
  5. You can create an additional Lambda that the flow reaches in case of an execution error (example: the endpoint is down).  In this project, it is called catch_errors.

Therefore, one Lambda function is created for the action of each step.

Function: receive_lead

Role: receives the Event Bridge event, cleans it, and passes it to the next step.  This step is important because the event arrives as a JSON document with attributes defined by Amazon, and the event content (the form JSON) is nested inside an attribute called “detail”.

When a source sends you an event through Event Bridge, the payload looks like this:

{
  "version": "0",
  "id": "c66caab7-10f8-d6e9-fc4e-2b92021ce7ed",
  "detail-type": "string",
  "source": "kranio.event.crm",
  "account": "{your account number}",
  "time": "2021-01-26T21:52:58Z",
  "region": "{your region}",
  "resources": [],
  "detail": {
    "name": "Camila",
    "lastname": "Saavedra",
    "rut": "9891283-0",
    "phone": 56983747263,
    "mail": "csaavedra@micorreo.com",
    "store": "est_central",
    "wantsBenefit": false,
    "origin": "app"
  }
}

We can define a Lambda that returns the content of “detail” to the next function, as in the following example:   

def handler(message, context):
    print('Receiving event...')
    return message["detail"]
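You can check this behavior locally by invoking the handler with a sample event shaped like the payload shown above (the values here are illustrative):

```python
# Local check of receive_lead: the handler simply unwraps "detail".
def handler(message, context):
    print('Receiving event...')
    return message["detail"]


# sample event shaped like the EventBridge payload shown above
sample_event = {
    "version": "0",
    "source": "kranio.event.crm",
    "detail": {
        "name": "Camila",
        "rut": "9891283-0",
        "wantsBenefit": False,
    },
}

result = handler(sample_event, None)
print(result["name"])  # -> Camila
```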

Function: create_client

Role: receives in message the content from the previous step's Lambda.  It takes the content and passes it as an argument when instantiating the CRMService class.

def handler(message, context):
    try:
        crm_service = CRMService(message)
        crm_service.create_client()
        return message
    except Exception as e:
        print("Exception:", e)
        return e

 

In the CRMService class, we declare the methods that will make the requests according to the endpoint. In this example, the request is to the create_client endpoint.  For API calls, the Python Requests library was used:

import json

import requests

# URL and CREATE_CLIENT are module-level constants holding the API base URL
# and the endpoint path.


class CRMService:
    def __init__(self, payload):
        self.payload = payload

    def create_client(self):
        try:
            r = requests.post(url=URL + CREATE_CLIENT, data=json.dumps(self.payload))
            # if the response code is different from 200, return an exception.
            if r.status_code != 200:
                return Exception(r.text)
            return json.loads(r.text)
        except Exception as e:
            return e

The Lambda functions for create_partner and create_benefit are similar to create_client, with the difference that they call the corresponding endpoints.  You can review case by case in this part of the repository.

Function: catch_errors

Role: captures the errors that occur and returns them so you can diagnose what happened. It is a Lambda function like any other, so it also has a handler and a context, and returns a JSON document.

def handler(message, context):
    exception_message = {
        "success": False,
        "description": "an exception occurred in the process. check log",
        "message": message
    }
    return exception_message
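For context on what this handler actually receives: when a Catch clause fires, Step Functions passes the error-handling state an error object containing "Error" and "Cause" fields (unless you reshape it with ResultPath). A local sketch with an illustrative error object:

```python
# catch_errors handler as defined above.
def handler(message, context):
    exception_message = {
        "success": False,
        "description": "an exception occurred in the process. check log",
        "message": message
    }
    return exception_message


# Illustrative error object, shaped like what a Catch clause delivers.
caught = handler(
    {"Error": "States.TaskFailed", "Cause": "connection refused"},
    None,
)
```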

Then we declare the serverless.yml of this project (note that the stepFunctions block below is provided by the serverless-step-functions plugin).

service: webinar-step-functions
 
# frameworkVersion: '2.3.0'
provider:
 name: aws
 runtime: python3.7
 stage: ${opt:stage, 'dev'}
 region: {YOUR-REGION-HERE}
 prefix: ${self:service}-${self:provider.stage}
 versionFunctions: false
 deploymentBucket:
   # here you must put the name of an S3 bucket you determine for deployment. Example: my_project_serverless_s3
   # if you put the same in each serverless file, your deploy per project will be organized and within the same bucket.
   name: kranio-webinar
 
package:
 excludeDevDependencies: false
 
custom:
 prefix: '${self:service}-${self:provider.stage}'
 # arn_prefix is a string that would be repeated many times if not parameterized. in dev.yml you can see what it contains.
 # according to your environment variables, you should make a document if necessary.
 arn_prefix: '${file(${self:provider.stage}.yml):arn_prefix}'
 defaultErrorHandler:
   ErrorEquals: ["States.ALL"]
   Next: CatchError
# declaration of all lambda functions
functions:
 receiveLead:
   handler: functions/receive_lead.handler 
 createClient:
   handler: functions/create_client.handler   
 createPartner:
   handler: functions/create_partner.handler
 createBenefit:
   handler: functions/create_benefit.handler
 catchErrors:
   handler: functions/catch_errors.handler
 
# the step functions
stepFunctions:
 stateMachines:
   CreateClientCRM:
     # here we indicate that the state machine starts when
     # the event eventBridge occurs that has that eventBusName and
     # specific source
     name: CreateClientCRM
     events:
       - eventBridge:
           eventBusName: WebinarEventBus
           event:
             source:
               - "kranio.event.crm"
     definition:
       # here we indicate that the state machine starts with this step.
       Comment: starts client registration process
       StartAt: ReceiveLead
       States:
         # now we indicate the states.
         # type indicates that these steps are tasks.
         # resource indicates the arn of the lambda function that
         # must be executed in this step
         # next indicates the next step
         # catch indicates which function we call if an error occurs, in this case, catch_error.
         ReceiveLead:
           Type: Task
           Resource: '${self:custom.arn_prefix}-receiveLead'
           Next: CreateClient
           Catch:
             - ${self:custom.defaultErrorHandler}
         CreateClient:
           Type: Task
           Resource: '${self:custom.arn_prefix}-createClient'
           Next: CreatePartner
           Catch:
             - ${self:custom.defaultErrorHandler}
         CreatePartner:
           Type: Task
           Resource: '${self:custom.arn_prefix}-createPartner'
           Next: CreateBenefit
           Catch:
             - ${self:custom.defaultErrorHandler}
         CreateBenefit:
           Type: Task
           Resource: '${self:custom.arn_prefix}-createBenefit'
           Catch:
             - ${self:custom.defaultErrorHandler}
           End: true
         CatchError:
           Type: Task
           Resource: '${self:custom.arn_prefix}-catchErrors'
           End: true
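Each state's Catch sends the flow to CatchError as a last resort. For transient failures (for example, a single timed-out request to an endpoint), you can also add a Retry clause before the Catch so the state is re-executed before the error handler takes over. A sketch for one state; the retry parameters here are illustrative:

```yaml
CreateClient:
  Type: Task
  Resource: '${self:custom.arn_prefix}-createClient'
  Next: CreatePartner
  Retry:
    # retry transient failures up to 2 times with exponential backoff
    - ErrorEquals: ["States.TaskFailed"]
      IntervalSeconds: 2
      MaxAttempts: 2
      BackoffRate: 2.0
  Catch:
    - ${self:custom.defaultErrorHandler}
```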

Now we have the Lambda functions for each step of the State Machine, we have the API that writes to the DB, and we have the exposed endpoint to make requests.

Sending a message to the Event Bridge event bus

For all this to start interacting, it is necessary to send the event that initializes the flow.

Assuming you are working in your company's CRM and getting the initial data from a web form, the way to write to the event bus that initializes the flow is through the AWS SDK.  Learn which languages it is available for here

If you are working with Python, the way to send the form would be this:

import json
from datetime import datetime

import boto3

client = boto3.client('events')
# running this script sends an event to EventBridge.
# Source and EventBusName must match what you declare in serverless.yml

response = client.put_events(
    Entries=[
        {
            'Time': datetime.now(),
            # event source
            'Source': 'kranio.event.crm',
            'Resources': [
                'string',
            ],
            'DetailType': 'CRM Registration',
            # payload is the form data.
            'Detail': json.dumps(payload),
            'EventBusName': 'WebinarEventBus'
        },
    ]
)
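For reference, the `payload` variable passed to json.dumps is simply the form data, matching the "detail" payload shown earlier (the values here are illustrative). put_events also returns a response whose FailedEntryCount field you can check to confirm delivery:

```python
import json

# Illustrative form data matching the "detail" payload shown earlier.
payload = {
    "name": "Camila",
    "lastname": "Saavedra",
    "rut": "9891283-0",
    "phone": 56983747263,
    "mail": "csaavedra@micorreo.com",
    "store": "est_central",
    "wantsBenefit": False,
    "origin": "app",
}

# this serialized string is what goes in the 'Detail' entry field
detail = json.dumps(payload)

# After calling put_events, verify delivery via the response:
# if response["FailedEntryCount"] > 0:
#     print("some entries were not accepted:", response["Entries"])
```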

Having configured everything correctly, go to the Step Functions service in your AWS console, where you will see the list of executions:

If you choose the most recent execution, you will see the execution sequence of the Step Functions and the details of their inputs and outputs:

If you choose the ReceiveLead step, the input corresponds to the payload sent as an event via Event Bridge.

The real test

If you access your database (either with the terminal client or with a visual client), you will see that each piece of data has been written to the corresponding table.

Conclusions

Step Functions is a very powerful service when you need to automate a sequential flow of actions.  We have created a simple example here, but it is highly scalable.  Also, working with Step Functions is an invitation to decouple the requirements of the solution you are implementing, which makes it easier to identify points of failure.

This type of orchestration is fully serverless and therefore much more economical than developing an application that runs on a server just to fulfill this role.

It is an excellent way to experiment with a hybrid cloud, reusing and integrating applications from your data center, and interacting with cloud services.

Ready to optimize your data flows in hybrid environments?

At Kranio, we have experts in system integration and serverless solutions who will help you orchestrate efficient processes between your on-premise systems and the cloud. Contact us and discover how we can drive your company's digital transformation.
