
DynamoDB in 360°: Key Concepts and Best Practices for Modeling NoSQL Data on AWS
Where to start modeling NoSQL data in DynamoDB: understand what it is, how it works, and how to adapt it to your projects to take advantage of AWS.
A large part of my daily work happens in the cloud. My favorite platform is Amazon Web Services, and I have experimented with many of its resources, but one of the ones I use the most is DynamoDB.
And I would like to share with you the considerations that I believe should be taken before getting started.
Well, to start from the most basic, let's see what Dynamo is and what it offers us.
DynamoDB is a fully managed NoSQL database service from Amazon that focuses on performance and availability: by default, it replicates a table's data across at least three Availability Zones within the Region where the table is created. This is where the high availability it is known for comes from. If you are not familiar with the concepts of Regions and Availability Zones, I recommend this documentation: https://aws.amazon.com/es/about-aws/global-infrastructure
The type of replication Dynamo uses, which addresses the problem of update latency across replicas, is known as leaderless replication. To learn more about this concept: https://distributed-computing-musings.com/2022/01/replication-introducing-leaderless-replication
Leaderless replication brings with it the problem of data inconsistency, and in Dynamo we have two consistency models to talk about:
Eventually Consistent Reads.
Between writing and reading data there will be a slight latency, so a query for recently written data could hit an Availability Zone that does not yet have the latest update. The official documentation does not give an exact delay, but it is almost imperceptible unless the data is, for example, the score of a real-time game.
Strongly Consistent Reads
This mode always answers queries with the latest updated data, without exception. But only under certain conditions, and those conditions are its disadvantages:
- A strongly consistent read may not be available if there is a delay or network interruption. In this case, DynamoDB may return a server error (HTTP 500).
- Strongly consistent reads are not supported with global secondary indexes.
- Strongly consistent reads use more throughput capacity than eventually consistent reads. This last point leads us to our next concept: read/write capacity.
⚠️ DynamoDB uses Eventually Consistent Reads, unless otherwise specified. Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this parameter to true, DynamoDB uses Strongly Consistent Reads.
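With the SDKs this is just one parameter on the read request. A minimal sketch of how the two requests differ (the table and key names here are hypothetical; with boto3 you would pass the same dict to `client.get_item`):

```python
def get_item_params(table_name, key, strongly_consistent=False):
    """Build the parameters for a DynamoDB GetItem request.

    DynamoDB defaults to eventually consistent reads, so
    ConsistentRead only needs to be set when you want a
    strongly consistent read.
    """
    params = {"TableName": table_name, "Key": key}
    if strongly_consistent:
        params["ConsistentRead"] = True
    return params

# Eventually consistent (the default): no ConsistentRead flag is sent.
default_read = get_item_params("GameScores", {"GameId": {"S": "g-1"}})

# Strongly consistent: always returns the latest data, costs more capacity.
strong_read = get_item_params("GameScores", {"GameId": {"S": "g-1"}},
                              strongly_consistent=True)
```

The same `ConsistentRead` parameter applies to Query and, for base tables and local secondary indexes, to BatchGetItem.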
A common mistake is to use Scan instead of Query to filter data. Avoid this whenever possible: Scan has to walk the entire table, which is irrelevant when you have a handful of items, but costly once you are working with thousands of items or large tables.
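The cost difference is easy to see with a toy in-memory model (purely illustrative; a real Query uses a `KeyConditionExpression` and a real Scan uses a `FilterExpression`): a Query only touches the items under one partition key, while a Scan examines every item in the table before filtering.

```python
# Toy model of a table partitioned by "user_id".
table = {
    "user-1": [{"user_id": "user-1", "order": i} for i in range(3)],
    "user-2": [{"user_id": "user-2", "order": i} for i in range(100)],
}

def query(table, partition_key):
    """Like DynamoDB Query: jumps straight to one partition."""
    items = table.get(partition_key, [])
    return items, len(items)          # (items returned, items examined)

def scan(table, predicate):
    """Like DynamoDB Scan: examines EVERY item, then filters."""
    examined = 0
    results = []
    for partition in table.values():
        for item in partition:
            examined += 1
            if predicate(item):
                results.append(item)
    return results, examined

q_items, q_examined = query(table, "user-1")
s_items, s_examined = scan(table, lambda i: i["user_id"] == "user-1")
# Same 3 items either way, but the scan examined all 103 items
# (and, in real DynamoDB, consumed read capacity for every one of them).
```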
Among the questions you should always ask yourself before creating a table in DynamoDB are the following:
- What are the access patterns for the data?
- What is the average size of the data?
- How often do I need to read the data? Do I need more read capacity, more write capacity, or both equally?
- At what speed do I want to read that data?
- At what speed do I want to write data?
- What kind of Dynamo table do I need? This is easily answered, since there are two table classes (DynamoDB Standard and Standard-Infrequent Access), so: are we going to access the data frequently or infrequently?

Regarding data size and the read/write speed and capacity you need: you must understand how capacity units work and how to choose them correctly, since they define how fast you can write to and read from a table.
To size the read and write capacity you will need, first choose a read/write capacity mode.
The mode defines how read/write throughput is billed. There are two available modes:
- On-Demand: Dynamo scales automatically whenever the table's read and/or write traffic fluctuates, and you pay per request.
- Use this mode when you don't know what workloads you are facing or when they are unpredictable.
- Provisioned: You specify the exact capacity units, or, by enabling Auto Scaling, you set minimum and maximum limits for read and write capacity units.
- Use this mode when you know the size of your data, whether and by how much it varies, and when you have metrics on how often you will read and write it.
- If you want to use this mode but are not sure about the capacity units, you can start in On-Demand mode (or use Auto Scaling in provisioned mode) and use a CloudWatch metric to learn how many read and write units you need. Example:
{
  "MyTableReadCapacityUnitsLimitAlarm": {
    "Type": "AWS::CloudWatch::Alarm",
    "Properties": {
      "AlarmName": "myTable-ConsumedReadCapacity",
      "AlarmDescription": "Alarm when read capacity reaches 80% of my provisioned read capacity",
      "AlarmActions": [{ "Ref": "AlarmEmailNotificationSnsTopic" }],
      "Namespace": "AWS/DynamoDB",
      "MetricName": "ConsumedReadCapacityUnits",
      "Statistic": "Sum",
      "Period": "60",
      "Dimensions": [
        {
          "Name": "TableName",
          "Value": "MyTable"
        }
      ],
      "EvaluationPeriods": "1",
      "ComparisonOperator": "GreaterThanOrEqualToThreshold",
      "Threshold": "240",
      "Unit": "Count"
    }
  }
}
For more cases and examples: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/creating-alarms.html
If you don't know how to implement metrics, alarms, and monitoring in Cloudwatch, take a look at this post: https://www.kranio.io/blog/metricas-y-alarmas-de-recursos-aws
Tip: You can change a table's read/write capacity mode at most once every 24 hours.
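Switching modes is a single UpdateTable call. A sketch of the request parameters (the table name and the 5/5 throughput values are illustrative placeholders; with boto3 you would pass this dict to `client.update_table`):

```python
def switch_billing_mode(table_name, on_demand=True):
    """Build UpdateTable parameters to change the capacity mode.

    DynamoDB accepts this change at most once per 24 hours per table.
    """
    if on_demand:
        return {"TableName": table_name,
                "BillingMode": "PAY_PER_REQUEST"}
    # Going back to provisioned mode requires explicit throughput;
    # the 5/5 values below are just example numbers.
    return {"TableName": table_name,
            "BillingMode": "PROVISIONED",
            "ProvisionedThroughput": {"ReadCapacityUnits": 5,
                                      "WriteCapacityUnits": 5}}

to_on_demand = switch_billing_mode("MyTable", on_demand=True)
to_provisioned = switch_billing_mode("MyTable", on_demand=False)
```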
Alright, but up to this point we don't know how a capacity unit is defined, so let's get to it.
For on-demand tables, read units are defined as follows:
- A strongly consistent read request of up to 4 KB requires one read request unit.
- An eventually consistent read request of up to 4 KB requires half a read request unit.
- A transactional read request of up to 4 KB requires two read request units.
For provisioned tables, read units are defined as follows:
RCU [read capacity unit]
- 1 RCU is equivalent to one strongly consistent read per second for an item up to 4 KB
- 1 RCU is equivalent to two eventually consistent reads per second for an item up to 4 KB
- 2 RCU is equivalent to one transactional read per second for items up to 4 KB
For example: Imagine you have an 8 KB .xml file
If you want a strongly consistent read, you need two RCUs.
If you want an eventually consistent read, you need one RCU.
If you want a transactional read, you need 4 RCUs.
WCU [write capacity unit]
- 1 WCU is equivalent to one write per second for an item up to 1 KB
- 2 WCU is equivalent to one transactional write per second for items up to 1 KB.
For example: take your 8 KB .xml file again
If you want to sustain one standard write request per second, you need 8 WCUs.
If you want to sustain one transactional write request per second, you need 16 WCUs.
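The arithmetic above generalizes to a small helper (a sketch of the sizing rules: DynamoDB rounds item sizes up to the next 4 KB block for reads and the next 1 KB block for writes):

```python
import math

def rcus_per_read(item_size_kb, consistency="strong"):
    """RCUs consumed by one read per second of an item of this size."""
    blocks = math.ceil(item_size_kb / 4)   # reads are billed in 4 KB blocks
    factor = {"eventual": 0.5, "strong": 1, "transactional": 2}[consistency]
    return blocks * factor

def wcus_per_write(item_size_kb, transactional=False):
    """WCUs consumed by one write per second of an item of this size."""
    blocks = math.ceil(item_size_kb / 1)   # writes are billed in 1 KB blocks
    return blocks * (2 if transactional else 1)

# The 8 KB file from the example above:
assert rcus_per_read(8, "strong") == 2
assert rcus_per_read(8, "eventual") == 1
assert rcus_per_read(8, "transactional") == 4
assert wcus_per_write(8) == 8
assert wcus_per_write(8, transactional=True) == 16
```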


💡 More information about RCU and WCU can be found in the AWS documentation.
When you are not yet clear about any of the above, the DynamoDB console applies a default capacity configuration.
This is fine for testing, but for real projects, you need to know your data and its processes to achieve efficiency and take advantage of all the possibilities that Dynamo offers by adapting it 100% to you.
Basic anatomy of a Dynamo table

Primary Key: Must be unique and can be of two types: a simple primary key (partition key only) or a composite primary key (partition key plus sort key).
Partition Key: If the table only has a partition key, there cannot be two items with the same partition key value.
Composite primary key: It is a combination of partition key and sort key. If the table has a composite primary key, two items can have the same partition key value. However, those items must have different sort key values.
Sort Key: Also called the range key. It sorts items within a partition and does not need to be unique on its own; only the combination of partition key and sort key must be unique.
Items: Each table contains zero or more items. An item is a group of attributes that can be uniquely identified among all other items.
Attributes: Each item is composed of one or more attributes. An attribute is a fundamental data element, something that does not need to be broken down further.
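Putting the anatomy together, here is a sketch of CreateTable parameters for a table with a composite primary key (the table and attribute names are hypothetical; with boto3 this dict would go to `client.create_table`):

```python
create_table_params = {
    "TableName": "GameScores",
    # Only the key attributes are declared up front; items can carry
    # any other attributes, since DynamoDB is schemaless.
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},      # partition key
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},  # sort key
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
# Two items may share the same UserId as long as their GameTitle differs:
# the combination (UserId, GameTitle) must be unique.
```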
DynamoDB Accelerator (DAX)
Another concept to consider when working with data focused on consistency and high availability is caching. Although AWS has a separate service for this purpose (ElastiCache), DynamoDB has its own caching sub-service: DynamoDB Accelerator (DAX). DAX consists of two caches sitting between the Dynamo table and the client: the item cache and the query cache.

The flow when using DAX is as follows:
For a key-value read from the client application, DAX first looks in its item cache. If the item is there, it is returned immediately. Otherwise, DAX fetches it from the DynamoDB table, returns it to the client, and stores it in the cache. If the DAX cluster has more than one node, the item is replicated to all nodes to keep the cache consistent.
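The item-cache half of that flow is essentially a read-through cache. A conceptual sketch (this is NOT the real DAX client, which AWS ships as a drop-in replacement for the SDK; it only illustrates the lookup order):

```python
class ReadThroughItemCache:
    """Illustrates DAX's item cache: check the cache first, fall back to the table."""

    def __init__(self, table):
        self.table = table   # stand-in for the DynamoDB table
        self.cache = {}      # stand-in for the DAX item cache
        self.hits = 0
        self.misses = 0

    def get_item(self, key):
        if key in self.cache:          # cache hit: no table read needed
            self.hits += 1
            return self.cache[key]
        self.misses += 1               # cache miss: read the table...
        item = self.table.get(key)
        if item is not None:
            self.cache[key] = item     # ...and populate the cache
        return item

dax = ReadThroughItemCache({"user-1": {"name": "Ada"}})
first = dax.get_item("user-1")   # miss: goes to the table
second = dax.get_item("user-1")  # hit: served from the cache
```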
Error handling in Dynamo
Proper error handling in any development is always fundamental to the quality of that development and says a lot about the control the developer has over the code, or in this case, over an AWS resource.
Fortunately, AWS provides good documentation about errors and their use. Still, let's see some general points.
If you are implementing Dynamo with the SDK, you have HTTP responses of 3 types:
- 2xx for a successful operation.
- 4xx for an unsuccessful operation. This type of error has three components:
- The HTTP status code.
- The exception name Dynamo gives to the problem it identified.
- An error message explaining the cause of the error more clearly.
- The available 4xx errors are:
- AccessDeniedException, ConditionalCheckFailedException, IncompleteSignatureException, ItemCollectionSizeLimitExceededException, LimitExceededException, MissingAuthenticationTokenException, ProvisionedThroughputExceeded, ProvisionedThroughputExceededException, RequestLimitExceeded, ResourceInUseException, ResourceNotFoundException, ThrottlingException, UnrecognizedClientException, ValidationException
- 5xx for an AWS internal problem. It can also be a transient error (such as operational unavailability), which can be resolved with a retry. The available 5xx errors are:
- Internal Server Error (HTTP 500)
- Service Unavailable (HTTP 503)
To make good use of these errors, wrap your Dynamo calls in try-catch blocks and handle each case explicitly.
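A common pattern is to catch the error, re-raise 4xx client errors immediately, and retry the transient ones (5xx, throttling) with exponential backoff. A library-agnostic sketch (with boto3 you would catch `botocore.exceptions.ClientError` and inspect the error code; here a plain exception with a hypothetical `code` attribute stands in for it):

```python
import time

# Error codes worth retrying (transient by nature).
RETRYABLE = {"InternalServerError", "ServiceUnavailable",
             "ThrottlingException", "ProvisionedThroughputExceededException"}

class DynamoError(Exception):
    """Stand-in for an SDK error that exposes the DynamoDB error code."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def call_with_retries(operation, max_attempts=3, base_delay=0.01):
    """Retry transient errors with exponential backoff; re-raise the rest."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except DynamoError as err:
            if err.code not in RETRYABLE or attempt == max_attempts - 1:
                raise                      # client error, or out of retries
            time.sleep(base_delay * 2 ** attempt)

# Simulated operation: fails twice with a 5xx-style error, then succeeds.
attempts = {"n": 0}
def flaky_get_item():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DynamoError("ServiceUnavailable")
    return {"Item": {"UserId": {"S": "user-1"}}}

result = call_with_retries(flaky_get_item)
```

Note that the AWS SDKs already retry many of these errors automatically; an explicit wrapper like this is mainly useful when you need custom backoff or logging.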
💡 You can read more about error handling here: https://n9.cl/ugvbo
💡 Tip: Dynamo has special error handling for DynamoDB Transactions, which perform multiple reads/writes in a single all-or-nothing request. The most common errors are transaction conflicts, which occur when:
- A PutItem, UpdateItem, or DeleteItem request for an item conflicts with an ongoing TransactWriteItems request that includes the same item.
- An item included in a TransactWriteItems request is part of another ongoing TransactWriteItems request.
- An item included in a TransactGetItems request is part of another ongoing TransactWriteItems, BatchWriteItem, PutItem, UpdateItem, or DeleteItem request.
To better understand what DynamoDB Transactions are: https://n9.cl/b9p0x
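As a sketch of what such a request looks like (the table names, keys, and token are hypothetical; with boto3 this dict would go to `client.transact_write_items`), a TransactWriteItems request groups multiple actions that succeed or fail together:

```python
transact_params = {
    # All actions below succeed or fail atomically; a conflict with another
    # in-flight transaction raises TransactionCanceledException.
    "TransactItems": [
        {"Put": {
            "TableName": "Orders",
            "Item": {"OrderId": {"S": "o-1"}, "Status": {"S": "PLACED"}},
            # Guard against overwriting an existing order.
            "ConditionExpression": "attribute_not_exists(OrderId)",
        }},
        {"Update": {
            "TableName": "Inventory",
            "Key": {"Sku": {"S": "sku-42"}},
            "UpdateExpression": "SET Stock = Stock - :one",
            "ConditionExpression": "Stock >= :one",
            "ExpressionAttributeValues": {":one": {"N": "1"}},
        }},
    ],
    # Makes retries of the SAME transaction idempotent.
    "ClientRequestToken": "order-o-1-attempt-1",
}
```

If the transaction is cancelled, the exception's cancellation reasons tell you which action failed and why, which is what you should inspect in your catch block.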
Alright, you now know what Dynamo is, how it works, and some special considerations for managing your data properly. Remember this advice: analyze the data before you start modeling, and understand its purpose and the concurrency of its reads and writes. With that in mind you will have more control and, consequently, more efficiency.
Ready to optimize your projects with DynamoDB on AWS?
At Kranio, we have data solutions experts who will help you implement best practices and efficient strategies using DynamoDB, ensuring scalability and performance of your applications. Contact us and discover how we can drive your company's digital transformation.