StatelyDB vs. DynamoDB
StatelyDB currently uses DynamoDB as its first storage engine. We chose DynamoDB because we wanted to build on a solid foundation, and our experience building huge systems at Snapchat and Amazon taught us that DynamoDB is the best choice for building systems that can safely scale to high usage without becoming an operational burden. We use DynamoDB as a simple storage layer, which means that when you use StatelyDB, you can rely on AWS to handle the availability, durability, and backup of your data. Why reinvent the wheel when it comes to saving bytes to disk?
While StatelyDB is its own database, it takes a lot of design inspiration from DynamoDB in addition to using it as a storage layer. Like DynamoDB, we have a simple API, a focus on partitioning data, and the ability to save multiple document types in a single hierarchy. This page describes how StatelyDB differs from using DynamoDB directly through the AWS SDK.
At a high level:
- StatelyDB offers a higher-level, developer-friendly API with built-in schema management, validation, and automatic backwards compatibility, while DynamoDB requires manual handling of schemas and data mapping.
- StatelyDB supports advanced features like serializable transactions, delta sync, flexible indexing, and generated client code for multiple languages, which are limited or unavailable in DynamoDB.
- StatelyDB is designed around a cost-effective single-table layout, whereas DynamoDB leaves single-table design entirely up to you.
- StatelyDB has an active roadmap for cross-cloud support, regionalization, and integrated caching, aiming to extend beyond DynamoDB’s native features.
Detailed Comparison
|  | StatelyDB | DynamoDB |
|---|---|---|
| **APIs** |  |  |
| CRUD APIs | Yes, basic `Get`, `Put`, and `Delete`. All APIs accept batches. | Yes: `GetItem`, `PutItem`, `UpdateItem`, and `DeleteItem`, each with a separate batch version. |
| Query/List APIs | Yes, `List` and `Scan` with pagination via a continuation token. | Yes, `Query` and `Scan` with pagination via a last evaluated key. |
| Transactions | Yes, interactive, serializable transactions. A custom transaction system lowers the cost of transactions within a group. | Kinda. Only the `TransactWriteItems` and `TransactGetItems` batch APIs; read-modify-write transactions are left as an exercise for the reader. Transactions cost 2x non-transactional operations. |
| Delta Sync | Yes, `SyncList` with awareness of the current pagination window. | No |
| Append New Item | Yes, with ID generators for sequential IDs, random UUIDs, and random `uint64`s. All generated IDs enforce uniqueness of the resulting item. | No |
| Indexes | Yes, key path aliases transactionally save multiple copies under different keys. Indexing multiple attributes is supported, with proper sorting. | DynamoDB LSIs/GSIs, for single attributes (or DIY multi-attribute indexes) |
| Client SDK | High-level: developers use the objects they defined in schema and simply `Put`, `Get`, `Query`, etc. | Low-level and verbose: developers must map objects into attributes, understand DynamoDB's rules, construct update expressions and conditions, etc. |
| Change Streaming | Yes, via a conversion library that translates DynamoDB streams into StatelyDB Items | Yes |
| Eventually Consistent Reads | Opt-in | Opt-out |
| **Schema and Data Model** |  |  |
| Schema | Yes, define your Elastic Schema using an easy-to-write TypeScript DSL | No, schema is implicit in application code |
| Automatic Backwards Compatibility | Yes, for all past schema versions | No, data versioning and compatibility must be handled manually in code |
| Generated Client Code | Yes, in JS/TS, Go, Python, and Ruby. More are quick to add. | No, AWS SDK only |
| Custom Data Validation | Yes, schemas are typed and allow custom validation expressions | No |
| AI Schema Design and Migration Assistant | Yes, via an MCP server in VS Code or Claude Desktop | No, AI copilots cannot validate the safety of proposed DynamoDB code |
| Single-table Design | Yes, our schema encourages a cost-effective single-table design that allows fetching multiple types of items in a single query | DIY, if you can figure out how to do it yourself |
| Data Catalog | Yes, integrated into our schema system: you can use APIs to determine exactly what kinds of data are stored | No |
| **Infrastructure** |  |  |
| Deployment Model | Sidecar + customer-managed table, or a [hosted serverless API](/deployment/serverless/) | Serverless API |
| Data Migrations & Backfills | Yes, running on either your service sidecar or dedicated containers | No |
| Regionalization | On the roadmap: group homing, a migration API, and automatic re-homing | Only via Global Tables (multiply cost by the number of regions) |
| Cross-Cloud Support | On the roadmap | Never going to happen |
| Performance/Cost Overhead | Low: our high-performance custom Go DynamoDB client means our sidecar uses very few resources and has a much higher throughput ceiling (~2x) than even using the AWS SDK for Go directly | Moderate: you'll use the AWS SDKs here too |
| Integrated Caching | On the roadmap | Yes, via a separate DAX cluster |
| Self-Service Setup | Yes | Yes |
| Cost Tuning | Yes, you can enable/disable functionality per key to optimize between cost and functionality | No |
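The single-table rows above hinge on composite partition and sort keys: every item type shares one table, and a key prefix encodes the type. A minimal sketch of that pattern in Go, with hypothetical helper names (`userPK`, `emailPK`, `orderSK`) that are ours for illustration, not part of either API:

```go
package main

import "fmt"

// userPK builds the partition key for a user's main record. All of a user's
// related items live under this same partition key.
func userPK(id string) string { return fmt.Sprintf("USER#%s", id) }

// emailPK builds the partition key for an email-uniqueness lookup record.
func emailPK(email string) string { return fmt.Sprintf("EMAIL#%s", email) }

// orderSK builds the sort key for an order item within a user's partition,
// so a single Query on the partition key returns the user plus their orders.
func orderSK(orderID string) string { return fmt.Sprintf("ORDER#%s", orderID) }

func main() {
	fmt.Println(userPK("42"))    // USER#42
	fmt.Println(orderSK("1001")) // ORDER#1001
}
```

With DynamoDB you maintain these conventions by hand in every call site; StatelyDB's schema-driven key paths generate the equivalent layout for you.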
Compare the Code
Writing a simple User object to DynamoDB using their SDK (see the full code):
```go
func NewDynamoDBClient(ctx context.Context, tableName string) (*DynamoDBClient, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, fmt.Errorf("unable to load SDK config: %w", err)
	}

	client := dynamodb.NewFromConfig(cfg)

	return &DynamoDBClient{
		client: client,
		table:  tableName,
	}, nil
}

var emailRegex = regexp.MustCompile(`[^@]+@[^@]+`)

func (c *DynamoDBClient) CreateUser(ctx context.Context, displayName, email string) (*User, error) {
	if displayName == "" {
		return nil, fmt.Errorf("display name cannot be empty")
	}
	if email == "" {
		return nil, fmt.Errorf("email cannot be empty")
	}
	if !emailRegex.MatchString(email) {
		return nil, fmt.Errorf("invalid email format")
	}

	user := &User{
		ID:          uuid.New(),
		DisplayName: displayName,
		Email:       email,
	}

	// Create the main user record
	userAV, err := attributevalue.MarshalMap(user)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal user: %w", err)
	}
	userAV["PK"] = &types.AttributeValueMemberS{Value: fmt.Sprintf("USER#%s", user.ID.String())}
	userAV["SK"] = &types.AttributeValueMemberS{Value: "METADATA"}

	// Create the email lookup record with full user data
	emailAV := maps.Clone(userAV)
	emailAV["PK"] = &types.AttributeValueMemberS{Value: fmt.Sprintf("EMAIL#%s", email)}
	emailAV["SK"] = &types.AttributeValueMemberS{Value: "METADATA"}

	_, err = c.client.TransactWriteItems(ctx, &dynamodb.TransactWriteItemsInput{
		TransactItems: []types.TransactWriteItem{
			{
				Put: &types.Put{
					TableName: aws.String(c.table),
					Item:      userAV,
				},
			},
			{
				Put: &types.Put{
					TableName:           aws.String(c.table),
					Item:                emailAV,
					ConditionExpression: aws.String("attribute_not_exists(PK)"),
				},
			},
		},
	})

	if err != nil {
		var txErr *types.TransactionCanceledException
		if errors.As(err, &txErr) {
			for _, reason := range txErr.CancellationReasons {
				if reason.Code != nil && *reason.Code == "ConditionalCheckFailed" {
					return nil, fmt.Errorf("email %s is already in use", email)
				}
			}
		}
		return nil, fmt.Errorf("failed to create user: %w", err)
	}

	return user, nil
}
```
While in StatelyDB, you get to use your own types (see the full code):
```go
func (c *Client) CreateUser(ctx context.Context, displayName, email string) (*schema.User, error) {
	item, err := c.client.Put(ctx, &schema.User{
		DisplayName: displayName,
		Email:       email,
	})
	if err != nil {
		return nil, err
	}
	return item.(*schema.User), nil
}
```