DynamoDB Cheat Sheet

I created this DynamoDB cheat sheet based on DynamoDB as of 11/14/2020. Does it include everything? Of course not, and it may become obsolete at some point given how frequently AWS updates its product line. However, I will try to keep it updated as things change, and if you choose to refer to it, I strongly suggest linking to it so you always view the latest content.

Tables

  • Data is stored across partitions, and partitions are stored on multiple servers.
  • The partition key needs to have a large number of unique values relative to the total number of items stored, to ensure adequate distribution across partitions
  • Spread data across partitions evenly to optimize read requests
  • A ListTables operation can return at most 100 tables per response, so use pagination if more than 100 exist
  • DescribeTable is used to return the structure of a table
  • Tables are automatically replicated across 3 availability zones in a region for high availability and durability
  • Maximum provisioned throughput per table is 10,000 R/W CU
    • Per account, the maximum is 20,000
  • Provisioned throughput can be decreased at most 4 times per table per day, combined across RCU and WCU
  • Each Partition
    • 10GB Max Storage
    • 3,000 RCU Max
    • 1,000 WCU Max
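
The per-partition limits above suggest a rough capacity-planning heuristic. AWS does not publish its exact partition-allocation formula, so the sketch below is only an estimate; the function name and structure are my own.

```python
import math

# Per-partition limits from the list above
PARTITION_MAX_GB = 10
PARTITION_MAX_RCU = 3000
PARTITION_MAX_WCU = 1000

def estimate_partitions(storage_gb, rcu, wcu):
    """Rough lower bound on the number of partitions a provisioned
    table needs, based on whichever limit is hit first."""
    by_size = math.ceil(storage_gb / PARTITION_MAX_GB)
    by_throughput = math.ceil(rcu / PARTITION_MAX_RCU + wcu / PARTITION_MAX_WCU)
    return max(by_size, by_throughput, 1)
```

For example, 25 GB of data provisioned at 6,000 RCU and 1,000 WCU needs at least 3 partitions by either measure.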

Items

  • Each item must be <= 400 KB, which includes both attribute name lengths (UTF-8 binary length) and attribute value lengths (again, binary length); attribute names count toward the size limit
  • 1 RCU is used for a strongly consistent read even if the item is smaller than 4 KB
  • Optimize RCU usage by keeping attribute names short
  • Reduce item size (and RCU usage) by storing infrequently used attributes in a separate table, reducing the returned payload
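
The 4 KB read-rounding rule can be expressed directly. This is a sketch; the helper name is mine, not an AWS API.

```python
import math

def rcu_for_item(item_size_kb, eventually_consistent=False):
    # Reads are billed in 4 KB blocks, rounded up, with a minimum of 1 block;
    # an eventually consistent read costs half a strongly consistent one.
    blocks = max(1, math.ceil(item_size_kb / 4))
    return blocks / 2 if eventually_consistent else blocks
```

A strongly consistent read of a 38 KB item costs 10 RCU, matching the example in the Capacity Units section.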

Attributes

  • A Null or Bool attribute is the length of the name + 1
  • List or Map requires 3 bytes of overhead in all cases
  • Types (see the AWS documentation for more details)
    • Scalar (only one value)
      • String (maximum size bounded by the 400 KB item limit)
      • Binary
      • Number
      • Boolean
      • Null — Represents an attribute with an unknown or undefined value
    • Set (Arrays of …)
      • String 
      • Number
      • Binary (BLOBS)
    • Document
      • Map — Unordered collection of name/value pairs accessible by pair name. Values can themselves be subcollections (complex JSON)
      • List — Ordered array of attribute values, which can be of different types (for complex data storage), accessed by position index rather than name. A single list can hold a string, a number, and a binary value.
      • Maps and lists can be nested up to 32 levels deep
    • Comments
      • Note there is no date type; use the string (e.g., ISO 8601) or number (e.g., epoch time) type instead
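
The sizing rules above (name length plus value length; Null/Bool = name + 1; 3 bytes of overhead for each List or Map) can be sketched as a rough estimator. This is an approximation, not DynamoDB's exact encoding — in particular, numbers are variable-length on the wire and are estimated at a worst case here — and the function name is my own.

```python
def attribute_size(name, value):
    """Approximate stored size in bytes of one attribute (name + value)."""
    size = len(name.encode("utf-8"))
    # Check None/bool before numbers: Python bools are also ints
    if value is None or isinstance(value, bool):
        return size + 1                       # Null/Bool: name length + 1 byte
    if isinstance(value, str):
        return size + len(value.encode("utf-8"))
    if isinstance(value, (bytes, bytearray)):
        return size + len(value)
    if isinstance(value, (int, float)):
        return size + 21                      # worst-case number estimate (assumption)
    if isinstance(value, dict):               # Map: 3 bytes of overhead
        return size + 3 + sum(attribute_size(k, v) for k, v in value.items())
    if isinstance(value, list):               # List: 3 bytes of overhead
        return size + 3 + sum(attribute_size("", v) for v in value)
    raise TypeError(f"unsupported type: {type(value)!r}")
```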

Primary Keys

  • Partition Key
    • When used alone, it must be unique across all items in the table. An internal hash function generates a hash of the key, which determines the partition the item is stored on. It is also known as the “Hash Key”
    • Supports data type of String, Binary, or Number
  • Partition Key and Sort Key
    • When a sort key is used along with the partition key, all items sharing a partition key are stored in the same partition and sorted by the sort key. It is also known as the “Range Key” since items are stored in sorted order by this key within a partition.
    • Supports data type of String, Binary, or Number
    • The combination of the Partition Key and Sort Key must be unique across all items in the table
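
A minimal sketch of how a composite primary key is declared when creating a table. The table and attribute names are invented for illustration; this dict is the request shape of the low-level CreateTable API (e.g., boto3's client.create_table(**composite_key_table)).

```python
# Hypothetical table: partition key CustomerId, sort key OrderDate
composite_key_table = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},  # S = String
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition ("hash") key
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},   # sort ("range") key
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

With a HASH key alone, the partition key must be unique by itself; adding the RANGE key makes the pair the uniqueness constraint.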

Capacity Units

  • General
    • Use the DescribeTable operation to see the provisioned throughput settings for a table and all of its indexes
    • Capacity units are based on 4 KB per read and 1 KB per write
    • Use a monitoring tool such as CloudWatch and set alerts to notify when a certain level has been reached so that you can adjust as needed before running into performance problems.
    • Writes are automatically synced across 3 AZs and can therefore take up to 1 second to become fully consistent
    • Overall scale and performance is dependent on read and write capacity units available
  • Read
    • Strongly consistent reads reflect all prior successful writes, while eventually consistent reads may lag by up to 1 second as updates propagate across partitions and AZs; eventually consistent reads have double the throughput capacity. A request parameter (ConsistentRead) must be set to use strongly consistent reads.
    • It is important to predetermine data usage patterns: how many reads per second, and how much data will be returned. This determines your RCU needs, e.g., the number of reads needed per second times the item size in 4 KB blocks.
    • The RCU used by a read request is the size of all attributes and items returned by a query, rounded up to the next 4 KB boundary
    • Reads for nonexistent items still use 1RCU
    • An RCU is one strongly consistent 4 KB read per second. For example, 5 RCU allow 5 strongly consistent reads of up to 4 KB each, per second. For eventually consistent reads, an RCU covers double: 2 reads per second instead of 1.
    • A consistent read of a 38kb item would need 10 RCU as an example.
    • A query response cannot be larger than 1 MB, including all items and attributes returned
    • BatchGetItem retrieves each item separately, so each item is rounded up separately
    • Query results are counted as one read operation, rounded up once after all item sizes are summed
    • A Scan is charged for the items scanned, not the items returned, so be careful how you use the Scan operation. At most 1 MB can be returned.
    • Example of determining provisioned RCU needs (reads/second × item size in 4 KB blocks, halved for eventually consistent reads):

      Item Size   Consistency             Reads/Second   RCU Required
      8 KB        Eventually Consistent   100            100
      16 KB       Eventually Consistent   100            200
  • Exceeding provisioned read capacity causes requests to be throttled
  • When writing to a table whose GSI has insufficient write capacity, the write to the table will be throttled
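
The provisioning arithmetic above reduces to one formula: required RCU = reads per second × item size in 4 KB blocks, halved for eventually consistent reads. A sketch (the function name is mine):

```python
import math

def required_rcu(item_size_kb, reads_per_second, eventually_consistent=True):
    # Item size rounds up to the next 4 KB block, with a minimum of 1 block
    blocks = max(1, math.ceil(item_size_kb / 4))
    rcu = reads_per_second * blocks
    # Eventually consistent reads get double throughput per RCU
    return math.ceil(rcu / 2) if eventually_consistent else rcu
```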

Secondary Indexes

  • General
    • Are partitioned just like tables are
    • Provide a way to sort and filter efficiently and cost-effectively by avoiding table scans
    • Store only the fields you need, to improve performance and reduce data consumption
    • Items smaller than 1 KB still use 1 entire write capacity unit when writing to the index; adding more attributes up to 1 KB costs nothing extra
    • Sparse indexes can be beneficial when an attribute does not appear in all items. Add an extra field to mark the item and index on that field, removing the field when it no longer applies. For example, an inactive customer can carry that field only while inactive.
    • Item collections (all table and local index items sharing a partition key) cannot exceed 10 GB, the maximum partition size. Take this into consideration when modeling your partition key and index attributes.
    • The total item collection size can be reduced by splitting data across multiple partition key values, such as appending a random number 1-100 to the key, and then querying all of those partitions concurrently instead of just one
    • Key Specification when creating indexes
      • KEYS_ONLY – Only the table partition and sort keys are included, creating the smallest possible index
      • INCLUDE – Specify additional non-key attributes to project; they become part of the index
      • ALL – Every attribute is projected, creating the largest possible index
  • Local (Maximum of 5 per table)
    • Created on the same partition, but uses a different sort key
    • Must be created at the time of table creation, not after
    • Combination of the partition key and range key must be unique (composite key)
    • Attributes not projected into an LSI can still be retrieved: DynamoDB will do a table fetch as part of the query and read the entire item, resulting in added latency, I/O operations, and a higher throughput cost
    • Updates are synchronous as part of the put/delete/update
    • Read and write capacity units are shared with the table
    • Data is copied to local indexes and will incur additional data charge
  • Global (default maximum of 20 per table)
    • Can be created after the table has been created
    • Updates are asynchronous (eventually consistent) as part of the put/delete/update
    • Eventually consistent reads consume ½ RCU per 4 KB, so 1 RCU covers 2 × 4 KB = 8 KB
    • Uses its own read and write capacity units, which are not shared with the table
    • The partition key can be different from the table partition key, since GSIs are stored in completely separate partitions from the table partitions
    • Put/Delete/Update operations on a table also consume WCU on its GSIs
    • Combination of partition key and range key do not need to be unique
    • Can only return attributes projected into the index; attributes cannot be retrieved from the parent table
    • Space consumed by a global secondary indexed item is the sum of:
      • Byte size of the table key (partition and sort)
      • Byte size of the index key attribute
      • Byte size of the projected attributes (if any)
      • 100 bytes of overhead per index item
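
The four components listed above sum directly. A trivial sketch (the names are mine) for estimating per-item GSI storage:

```python
def gsi_item_size_bytes(table_key_bytes, index_key_bytes, projected_bytes=0):
    # table key + index key attribute + projected attributes + fixed overhead
    INDEX_ITEM_OVERHEAD = 100
    return table_key_bytes + index_key_bytes + projected_bytes + INDEX_ITEM_OVERHEAD
```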

Global Tables

  • Fully managed across multiple regions (specified by the user) and supports multi-master implementations
  • 1 replica per region, each replica having the same name
  • Writes automatically propagated across regions
  • Read and Write across regions with single digit millisecond latency
  • Applications can use CloudWatch metrics to monitor for degradation and route connections to a different region
  • DynamoDB will track all writes and when a region is restored, any pending writes will be automatically propagated
  • Replication across regions is usually completed in under 1 second
  • Last Writer Wins (LWW) is used when update conflicts occur

Transactions (ACID)

  • Supported across a single region and AWS Account
  • Supports Put, Update, and Deletes
  • Supports eventually consistent and strongly consistent reads, as well as transactional read requests, which are canceled if they access items currently involved in a transaction
  • No additional cost, but 2 reads or writes are performed for each item in the transaction (prepare and commit)
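
A sketch of what a two-item transaction looks like in the low-level TransactWriteItems request format (e.g., boto3's client.transact_write_items(**request)). The table names, keys, and conditions are invented for illustration.

```python
# Hypothetical transaction: create an order and decrement inventory atomically
request = {
    "TransactItems": [
        {
            "Put": {
                "TableName": "Orders",
                "Item": {"OrderId": {"S": "o-1001"}, "Status": {"S": "PLACED"}},
                # fail the whole transaction if the order already exists
                "ConditionExpression": "attribute_not_exists(OrderId)",
            }
        },
        {
            "Update": {
                "TableName": "Inventory",
                "Key": {"Sku": {"S": "widget-42"}},
                "UpdateExpression": "SET Stock = Stock - :one",
                # fail the whole transaction if stock would go negative
                "ConditionExpression": "Stock >= :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
}
```

Both operations succeed or fail together; either condition failing cancels the entire transaction.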

DynamoDB Accelerator (DAX)

  • An in-memory cache layer that can deliver a performance increase of up to 10x
  • Can support millions of requests per second with millisecond-to-microsecond response times
  • Compatible with existing API calls (no code changes)
  • No need to build in cluster management, data population, or cache invalidation

Point in time recovery (PITR)

  • Restore to any point in time within the last 35 days, with per-second granularity (a recovery point as recent as 1 second ago)
  • Backup 100s of terabytes with no performance impacts
  • Backups are automatically encrypted

Data Access

  • There is no query language like SQL
  • There is no join capability like a SQL RDBMS has
  • Must be done via the API using Java, JavaScript, C#, Go, Python, or other supported languages.
  • CLI can also be used
  • AWS Data Pipeline
  • Third party tools
  • Access patterns need to be thought out ahead of time to design for optimal cost and performance
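
Since access goes through the API rather than SQL, queries are expressed as request parameters. A hedged sketch of a Query request in low-level form (e.g., boto3's client.query(**query_request)); the table and attribute names are invented:

```python
# Hypothetical query: all November 2020 orders for one customer
query_request = {
    "TableName": "Orders",
    # key condition: equality on the partition key, range condition on the sort key
    "KeyConditionExpression": "CustomerId = :cid AND begins_with(OrderDate, :prefix)",
    "ExpressionAttributeValues": {
        ":cid": {"S": "c-42"},
        ":prefix": {"S": "2020-11"},
    },
    "ConsistentRead": True,  # request a strongly consistent read (see Capacity Units)
    "Limit": 25,
}
```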