# StatelyDB Documentation > StatelyDB delivers schema intelligence built for the agentic era - where data models evolve in real-time and multiple versions operate simultaneously without conflict. # Introduction ## What is StatelyDB? (intro/what-is-statelydb) **StatelyDB is a serverless document database that solves the schema evolution problem.** Unlike traditional NoSQL databases where schema changes require careful coordination and manual data migration, StatelyDB automatically handles schema evolution while maintaining backwards compatibility with all existing applications. StatelyDB currently uses DynamoDB as its storage engine, giving you all the scalability and reliability benefits of AWS’s managed database service, but with a dramatically improved developer experience through our **Elastic Schema** system. We were inspired by DynamoDB when we decided to build StatelyDB, but we wanted to push *far* further into the vision of what you could do with an abstraction layer over a NoSQL datastore, especially once you have a schema that describes how the database should handle your data. StatelyDB’s signature feature is [**Elastic Schema**](/intro/elastic-schema), which lets you describe your data model, generate code to work with those types, and then enables you to change your data model at any time. The database automatically migrates data where necessary, while **maintaining backwards compatibility** with all previous versions of schema. This eliminates the need to coordinate deployments or carefully consider the impact of data model changes on existing clients. At its core, StatelyDB: * Is a partitioned document database that prioritizes scalability and low operational overhead. * Provides a developer-friendly API that you interact with via generated code that reflects your own data types. * Uses Elastic Schema to describe your data model, generate type-safe code, and provide automatic backwards compatibility as you change your data model. Elastic Schema makes change **safe, easy and predictable**. * Makes it easy to build single-table data models that optimize for efficiently getting all the data for a use case in a single request. ## How StatelyDB Works StatelyDB sits between your application and DynamoDB, providing: 1. **Schema Management**: Define your data model once, generate type-safe code for multiple languages 2. **Automatic Migrations**: When you change your schema, StatelyDB handles data transformation automatically 3. **Backwards Compatibility**: Old applications continue working even after schema changes 4. **Optimized APIs**: Higher-level operations than raw DynamoDB, with better performance ![This diagram compares a traditional app using DynamoDB (with a fat application layer and manual schema management) to a StatelyDB app (with a thin application layer, Elastic Schema, and code generation).](/_astro/stately-comp.9cA51ads_ZAzh6R.svg) ## Frequently Asked Questions #### What does StatelyDB have to do with DynamoDB? StatelyDB uses DynamoDB as a simple storage engine to persist and retrieve data. DynamoDB is a managed database service from AWS that’s built to scale forever with low operational overhead. We chose it as our first storage engine because we were inspired by its design, and we have experience operating it at scale. We also wanted to be able to offer world-class guarantees for scalability, availability and durability from day one. See [StatelyDB vs. DynamoDB](/intro/statelydb-vs-dynamodb) for more details. 
Eventually we will add more storage engines to provide cross-cloud capabilities or offer different price/performance tradeoffs. #### So is StatelyDB just a wrapper around DynamoDB? StatelyDB is a new database with its own API and a growing set of advanced capabilities. We just use DynamoDB for basic storage operations. On top of that, StatelyDB layers on a rich Elastic Schema system, powerful SDKs, and highly optimized APIs. See [StatelyDB vs. DynamoDB](/intro/statelydb-vs-dynamodb) for more details. #### Is StatelyDB just an ODM (Object Document Mapper) or ORM for DynamoDB? It could be tempting to think of what Elastic Schema does like an Object Document Mapper. ODMs are usually client libraries or code generators that help map your in-memory models to a document database’s APIs (for relational databases this is called an Object Relational Mapper or ORM). An ODM is effectively an application layer solution—it is used in your code to make it easier to work with a database. In contrast, in StatelyDB the database itself knows about your schema and can take actions and make decisions based on it, including validating your data, transforming it between versions, and providing backwards compatibility with older versions. Regardless of how many applications talk to the database, all of them gain these benefits because in StatelyDB, the schema is a data layer concern, not an application layer concern. #### Does it use the same API as DynamoDB? StatelyDB has its own API that’s both easier to use and more powerful than DynamoDB’s. Importantly, StatelyDB’s APIs speak in terms of your own data model as expressed through schema. This means you save a User object or an Order object, instead of saving a JSON document or a map of key/value pairs. #### Can I use StatelyDB with an existing DynamoDB table? Because DynamoDB is an implementation detail of StatelyDB’s storage engine, we have specific ways we store data in and interact with DynamoDB that wouldn’t work with an existing table. And, if you had applications that talked directly to the DynamoDB table and through the StatelyDB API, we could not guarantee correctness like we normally would. However, we do have an idea for how to “adopt” existing DynamoDB tables into StatelyDB and provide a gradual migration path. If this is interesting to you, [please reach out!](mailto:support@stately.cloud) We’d love to partner with you as a reference customer for this feature. #### How does this compare to other NoSQL databases like MongoDB? StatelyDB is similar to other NoSQL document databases like MongoDB, especially their hosted offering. MongoDB currently has some features that StatelyDB does not, but it crucially lacks schema tools. You can either use a third-party ODM, or leverage some built-in JSON schema for validation, but there are no provided tools for migrating data or managing backwards compatibility. #### How does this compare to relational/SQL databases like Postgres? Partitioned document databases like StatelyDB are very different from SQL databases like Postgres. While relational databases focus on flexible and powerful query capabilities, they fall down in scalability and consistent performance. As your stored data grows and your access patterns get more complicated, performance and operational problems crop up and you end up in a lot of trouble trying to scale out your data layer. 
In contrast, partitioned document databases like StatelyDB don’t offer complex query languages and instead encourage you to write data in a structure that is most efficient for it to be read, ensuring consistent performance no matter how much traffic you get and how much data you’ve stored. Lots of folks are working on ways to make SQL databases scale, and we wish them luck, but that’s not the approach we’ve taken when building huge applications at Snapchat and Amazon. #### What do you mean DynamoDB is StatelyDB’s *first* storage engine? It means that in time, we’ll have other storage engines that offer different properties—lower costs, higher performance, or availability outside AWS are all considerations. Since our data layer is abstracted from the schema and API layers, we can swap DynamoDB out for other options as necessary. If you have a different backend you’d like, [please reach out!](mailto:support@stately.cloud) We’d love to partner with you as a reference customer for this feature. #### Is StatelyDB only available as a hosted API? While our hosted service is an easy way to get started and can scale up to huge workloads, we anticipate that most users will want to use our [BYOC (Bring Your Own Cloud)](/deployment/byoc/) deployment model that keeps your data and AWS resources in your own AWS account. You can [try either mode for free](/deployment/) today! #### How much does StatelyDB cost? StatelyDB is free to try in both our hosted and BYOC models. Once you’ve had a chance to try it out, [contact us for pricing and support](mailto:support@stately.cloud) in adopting StatelyDB. #### What programming languages are supported? We currently support Go, Python, JavaScript/TypeScript, and Ruby. If you would like to use a different language, [let us know](mailto:support@stately.cloud) as we can quickly spin up new SDKs. ## What is Elastic Schema? (intro/elastic-schema) StatelyDB features Elastic Schema, which lets you: * Describe your data model with real types and integrated validation, enforcing that your data has the shape you want. * Generate code in your favorite language that works with your database types and removes the need to write object/document mapping code. * Change your data model any way, at any time. The database automatically migrates data where necessary, while maintaining backwards compatibility with all previous versions of schema. This eliminates the need to coordinate deployments or carefully consider the impact of data model changes on existing clients. ## Schema describes your data model A schema is a description of all the data types in your database. This means all the different kinds of objects, and the fields they can have. Many databases have the concept of a schema, though often “NoSQL” or document databases claim to not have one. Even if your database lets you store arbitrary JSON, you still have a schema - it’s just *implicit* in your application code. StatelyDB uses an explicit schema language, written in TypeScript, to define your data model and give the database information it can use to help you out, whether that’s by validating data, optimizing storage, or transforming it. Your schema is a single source of truth for what kinds of data your database can contain, and you can share that common definition between clients in multiple languages. As an example, here’s a simple User object in StatelyDB Elastic Schema: ``` import { itemType, string, timestampSeconds, uint, uuid, } from "@stately-cloud/schema"; /** A user of our fantastic new system.
*/ itemType("User", { keyPath: "/user-:id", fields: { id: { type: uuid, initialValue: "uuid" }, displayName: { type: string }, email: { type: string }, lastLoginDate: { type: timestampSeconds }, numLogins: { type: uint }, }, }); ``` ## Elastic Schema lets you change your mind While many databases have some way to express your data’s schema, Elastic Schema takes it further by allowing you to change your schema any way you want, any time. When you change your schema, earlier versions of schema are *still valid* and can be used by existing or new clients. Since StatelyDB knows about all your different schema versions, it will automatically transform data between the version that’s actually stored in the database and the version that each client expects. In many cases this transformation can be done without having to update anything in the database itself, meaning the “migration” is free. ![Elastic Schema diagram showing how multiple client versions collaborate on the same data model](/_astro/elastic-schema-diagram.b_hx_7v5_Z2dcvby.svg) ## Always backwards (and forwards) compatible Backwards compatibility means that a new system (in this case a new database schema) is still compatible with older data and clients. Think of it like a video game console that can still play old games from a previous generation, like the Wii playing GameCube games. Forwards compatibility is more rare—it means that an old system can still handle data created by a newer version of a system. To stretch an analogy, this is like how the original Game Boy could play Game Boy Color games (in greyscale!). It’s possible to manually maintain backwards and forwards compatibility as you evolve a data model by carefully following a set of rules about which changes are allowed and which aren’t, and then managing the difference in your application code. For example, say you wanted to change a field’s data type from a `string` to a `uint64`. You would have to follow a sequence of steps: 1. Add a new `uint64` field alongside the original `string` field. 2. Update your code to read the `string` field and write *both* the `string` and `uint64` field. You can’t stop writing the `string` field if old clients still use it! 3. Eventually *after old clients are deprecated*, you can stop writing the `string` field. 4. Write a backfill program to scan through all old records that only have the `string` field and copy the value over to the `uint64` field. 5. Go back and clean up the code that handles both fields—now you can use just the `uint64` version. 6. Finally you can write another backfill program to scan through all the old records and remove the `string` field entirely to save storage space. As you can imagine, this is easy to mess up, and after you’ve made a lot of changes like this, your application code will grow and grow as more of these cases are added. And this example is for one of the simpler kinds of change! Elastic Schema handles this all for you in the database. Under the hood, it’s doing much the same kind of work to maintain backwards and forwards compatibility, but your application doesn’t need to worry about any of it. In StatelyDB, you’d just change that `string` field to a `uint64`, and update your code to use the new type. Your new code doesn’t need to handle the old data, or worry about compatibility, while old code that expects a `string` keeps working. And when you eventually deprecate the old schema version, the database will automatically clean up after itself. 
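As a concrete (and hypothetical) illustration, here is roughly what that kind of change might look like in an Elastic Schema file, reusing the `User` item type from above with only the `itemType`, `string`, `uint`, and `uuid` helpers shown in these docs. The `loyaltyPoints` field is invented for this sketch, and depending on the change you may also need to describe it with a [migration command](/schema/migrations); the point is simply that the new schema version declares the new type and StatelyDB handles compatibility.

```typescript
import { itemType, string, uint, uuid } from "@stately-cloud/schema";

// Hypothetical example: loyaltyPoints used to be declared as a string...
//   loyaltyPoints: { type: string },
// ...and in the new schema version it is simply declared as an unsigned
// integer instead. Older clients built against the string version keep working.
itemType("User", {
  keyPath: "/user-:id",
  fields: {
    id: { type: uuid, initialValue: "uuid" },
    displayName: { type: string },
    loyaltyPoints: { type: uint },
  },
});
```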
## More than an `ALTER TABLE` In SQL databases there is a system for making changes like that called DDL (Data Definition Language). For example, the data type change above could be done with: ``` ALTER TABLE table_name ALTER COLUMN column_name TYPE INTEGER USING column_name::integer; ``` There are some major differences between this and what Elastic Schema does, though: 1. This statement will immediately lock the table (so no writes can happen) and update every row in the table to use the new format. If this takes a long time, you’ll have an outage. 2. If any values can’t be converted into an integer, the operation will fail. 3. Most importantly, *any code that treats this column as a string will now break*. How could you do this safely? Even ignoring the table lock, how could you sequence deploying your code with this operation? There are ways to do it (for example, update your SQL queries to cast the column back to a string, deploy that change, run the DDL, then update your SQL queries to remove the cast) but again, it’s very complex and error-prone. And what happens if you have to roll back your server after you updated the queries? The older code won’t work against the new database schema and your app will break. In contrast, Elastic Schema migrations don’t lock the database or disrupt any existing usage. They gracefully handle when existing data isn’t compatible, and they provide automatic backwards compatibility so existing code will keep working, and you can deploy the new code whenever you want—and roll back safely. ## StatelyDB vs. DynamoDB (intro/statelydb-vs-dynamodb) StatelyDB currently uses DynamoDB as its *first* storage engine. We chose DynamoDB because we wanted to build on a solid foundation, and our experience building huge systems at Snapchat and Amazon taught us that DynamoDB is the best choice for building systems that can safely scale to high usage without becoming an operational burden. We use DynamoDB as a simple storage layer, but that means when you use StatelyDB, you can rely on AWS to handle availability, durability, and backup of your data. Why reinvent the wheel on how to save bytes to disk? While StatelyDB is its own database, it does take a lot of design inspiration from DynamoDB in addition to using it as a storage layer. Like DynamoDB, we have a simple API, a focus on partitioning data, and the ability to save multiple document types in a single hierarchy. This page compares how StatelyDB differs from just using DynamoDB directly through the AWS SDK. At a high level: * **StatelyDB offers a higher-level, developer-friendly API** with built-in schema management, validation, and automatic backwards compatibility, while DynamoDB requires manual handling of schemas and data mapping. * **StatelyDB supports advanced features** like serializable transactions, delta sync, flexible indexing, and generated client code for multiple languages, which are limited or unavailable in DynamoDB. * **StatelyDB is designed for cost-effective single-table design**, whereas DynamoDB requires more manual setup for these capabilities. * **StatelyDB has an [active roadmap](/intro/roadmap)** for cross-cloud support, regionalization, and integrated caching, aiming to extend beyond DynamoDB’s native features.
## Detailed Comparison | | StatelyDB | DynamoDB | | -------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | | **APIs** | | | **CRUD APIs** | **Yes**, basic [Get](/api/get/), [Put](/api/put/), and [Delete](/api/delete/). All APIs accept batches. | **Yes**, GetItem, PutItem, UpdateItem, DeleteItem. There are separate batch versions of all of them. | | **Query/List APIs** | **Yes**, paginated [List](/api/list/) and [Scan](/api/scan/) with pagination via a continuation token. | **Yes**, Query and Scan with pagination via a last evaluated key. | | **Transactions** | **Yes** - Interactive, **serializable** [transactions](/api/transaction/). A custom transaction system allows for lower costs for transactions within a group. | **Kinda** - Only TransactWriteItems and TransactGetItems batch APIs. Read-modify-write transactions are left as an exercise for the reader. Transactions cost 2X non-transactional. | | **Delta Sync** | **Yes**, [SyncList](/api/sync/) with awareness of the current pagination window. | **No** | | **Append New Item** | **Yes**, with [ID generators](/schema/fields#initial-value-fields) for sequential IDs, random UUIDs, and random uint64. All generated IDs enforce uniqueness of the resulting item. | **No** | | **Indexes** | **Yes**, [key path aliases](/schema/keypaths/#multiple-key-paths) transactionally save multiple copies under different keys. Indexing **multiple attributes** is supported, with proper sorting. | **Dynamo LSI/GSI**, for **single** attributes (or DIY multi-attribute) | | **Client SDK** | **High level** - developers use the objects they defined in schema and simply Put, Get, Query, etc. | **Low level and verbose** - developers must map objects into attributes and understand DynamoDB rules, construct update expressions and conditions, etc. | | **Change Streaming** | **Yes**, via a conversion library that translates DDB streams into StatelyDB Items | **Yes** | | **Eventually Consistent Reads** | **Opt-in** | **Opt-out** | | | **Schema and Data Model** | | | **Schema** | **Yes** - Define your [Elastic Schema](/intro/elastic-schema) using an easy to write TypeScript DSL | **No**, schema is implicit in application code | | **Automatic Backwards Compatibility** | **Yes**, for all past schema versions | **No**, data versioning and compatibility must be handled manually in code | | **Generated Client Code** | **Yes**, in [JS/TS](/clients/javascript), [Go](/clients/go), [Python](/clients/python), and [Ruby](/clients/ruby). More are quick to add. 
| **No**, AWS SDK only | | **Custom data validation** | **Yes**, schemas are typed and allow custom validation expressions | **No** | | **AI Schema Design and Migration Assistant** | **Yes**, via [an MCP server](/schema/ai-copilot) in VSCode or Claude Desktop | **No**, AI copilots cannot validate the safety of proposed DDB code | | **Single-table Design** | **Yes**, our schema encourages a cost-effective [single-table design](https://aws.amazon.com/blogs/compute/creating-a-single-table-design-with-amazon-dynamodb/) that allows fetching multiple types of items in a single query | **DIY**, if you can figure out how to do it yourself | | **Data Catalog** | **Yes**, integrated into our schema system—you can use APIs to determine exactly what kinds of data are stored | **No** | | | **Infrastructure** | | | **Deployment Model** | [Sidecar + customer-managed table](/deployment/byoc) or a (hosted serverless API)\[/deployment/serverless/]. | Serverless API | | **Data Migrations & Backfills** | **Yes**, running on either your service sidecar or dedicated containers | **No** | | **Regionalization** | *On the roadmap* - group homing, migration API, and automatic re-homing. | Only via Global Tables (multiply cost x regions) | | **Cross-Cloud Support** | *On the roadmap* | **Never** going to happen | | **Performance/Cost Overhead** | **Low** - our high-performance custom Go DynamoDB client means our sidecar uses very few resources and has a much higher throughput ceiling (\~2x) than even using the AWS SDK for Go directly. | **Moderate** - You’ll use the AWS SDKs here too | | **Integrated Caching** | *On the roadmap* | **Yes** via a separate DAX cluster | | **Self Service Setup** | **Yes** | **Yes** | | **Cost Tuning** | **Yes**, you can enable/disable functionality per-key to optimize between cost and functionality | **No** | ## Compare the Code Writing a simple User object to DynamoDB using their SDK ([see the full code](https://github.com/StatelyCloud/demo-w/blob/main/pkg/ddb/ddb.go)): ``` func NewDynamoDBClient(ctx context.Context, tableName string) (*DynamoDBClient, error) { cfg, err := config.LoadDefaultConfig(ctx) if err != nil { return nil, fmt.Errorf("unable to load SDK config: %w", err) } client := dynamodb.NewFromConfig(cfg) return &DynamoDBClient{ client: client, table: tableName, }, nil } var emailRegex = regexp.MustCompile(`[^@]+@[^@]+`) func (c *DynamoDBClient) CreateUser(ctx context.Context, displayName, email string) (*User, error) { if displayName == "" { return nil, fmt.Errorf("display name cannot be empty") } if email == "" { return nil, fmt.Errorf("email cannot be empty") } if !emailRegex.MatchString(email) { return nil, fmt.Errorf("invalid email format") } user := &User{ ID: uuid.New(), DisplayName: displayName, Email: email, } // Create the main user record userAV, err := attributevalue.MarshalMap(user) if err != nil { return nil, fmt.Errorf("failed to marshal user: %w", err) } userAV["PK"] = &types.AttributeValueMemberS{Value: fmt.Sprintf("USER#%s", user.ID.String())} userAV["SK"] = &types.AttributeValueMemberS{Value: "METADATA"} // Create the email lookup record with full user data emailAV := maps.Clone(userAV) emailAV["PK"] = &types.AttributeValueMemberS{Value: fmt.Sprintf("EMAIL#%s", email)} emailAV["SK"] = &types.AttributeValueMemberS{Value: "METADATA"} _, err = c.client.TransactWriteItems(ctx, &dynamodb.TransactWriteItemsInput{ TransactItems: []types.TransactWriteItem{ { Put: &types.Put{ TableName: aws.String(c.table), Item: userAV, }, }, { Put: &types.Put{ TableName: 
aws.String(c.table), Item: emailAV, ConditionExpression: aws.String("attribute_not_exists(PK)"), }, }, }, }) if err != nil { var txErr *types.TransactionCanceledException if errors.As(err, &txErr) { for _, reason := range txErr.CancellationReasons { if reason.Code != nil && *reason.Code == "ConditionalCheckFailed" { return nil, fmt.Errorf("email %s is already in use", email) } } } return nil, fmt.Errorf("failed to create user: %w", err) } return user, nil } ``` While in StatelyDB, you get to use your own types ([see the full code](https://github.com/StatelyCloud/demo-w/blob/main/pkg/client/client.go)): ``` func (c *Client) CreateUser(ctx context.Context, displayName, email string) (*schema.User, error) { item, err := c.client.Put(ctx, &schema.User{ DisplayName: displayName, Email: email, }) if err != nil { return nil, err } return item.(*schema.User), nil } ``` ## Our Roadmap (intro/roadmap) StatelyDB is a rapidly developing platform with a rich roadmap and a dedicated team that lives and breathes databases. Some of the functionality we’re excited to deliver includes but is in no way limited to: * **Regional homing and migration of data** - this allows for data to be spread across multiple regions, clouds, or storage engines in order to reduce latency to customers or reduce operational or storage costs. * **Cross-cloud support** - combined with the regional migration support above, this allows for shifting load between clouds for reduced costs and better co-locality with compute and other services, all with a uniform API. * **Automatic data tiering** - this utilizes the regional migration support above to instead move data across different storage engines or configurations (e.g. between DynamoDB Infrequent Access and regular tiers, or between DDB and S3) in order to save costs on data with different access patterns. The StatelyDB abstraction means an application can uniformly handle data regardless of whether it’s in the “hot set” or “cold set”. * **Policy-driven data management** - leveraging schema annotations to enable policies around where data can be homed and how it can be handled. For example tagging user data to automatically enable download-my-data and GDPR compliance. * **Extended index types** including vector, geo, and full-text, reducing the need to leverage other databases in tandem with DynamoDB. * **Integrated caching** driven from policies encoded as declarative schema annotations. * **Automatic aggregations and materialized views** that replace the need for complicated pipelines and streaming infrastructure. * **“Backend as a service” functionality** that allows for exposing data models through automatic APIs without needing an intermediate service. * **Item-level security rules** that help enforce access and update policies in a verifiable way. * **S3-backed “File” data type** that integrates large-blob storage with structured database storage. * **Automatic cost minimization based on usage** - using metrics as a feedback loop to automate configuration choices that can reduce operational costs. * **Proactive AI Schema Advice** - based on metrics and knowledge of your schema, StatelyDB can suggest changes that could improve your data model, saving costs or improving latency. * **Integrated realtime notifications** to enable realtime sync to further reduce costs. * **Generic mobile client SDK with offline sync** - rich local-first clients that reflect changes in real-time and integrate with popular mobile and web frameworks. 
If any of these sound exciting, [please reach out!](mailto:support@stately.cloud) We’d love to partner with you as a reference customer for new features. # Guides ## Getting Started (guides/getting-started) ## Create a Stately Cloud Account Before you can do anything else you need a Stately Cloud account. Your account is used for things like accessing the control panel and CLI. 1. Visit and log in with your preferred account. 2. You should now be logged in to the console, with a new, empty organization for yourself. 3. Click “Create Store” to get a new store and schema to use. Write down both the Store ID and Schema ID. ## Download the `stately` CLI Download our [CLI](/cli) for your platform of choice by visiting . The CLI is required for working with [schema](/guides/schema) and generating client SDKs. Windows users are recommended to use [WSL](https://learn.microsoft.com/en-us/windows/wsl/install). For operating systems with a Unix-y shell, you can run this command to install: ``` curl -sL https://stately.cloud/install | sh ``` Once the CLI has been installed, log in using the same account you used on the console: ``` stately login ``` ## Install an SDK Install the SDK for your language. We currently support [Go](/clients/go), [TypeScript/JavaScript](/clients/javascript), [Python](/clients/python) and [Ruby](/clients/ruby). If your favorite language isn’t one of those, let us know at . ### Go ``` go get github.com/StatelyCloud/go-sdk ``` ### Ruby ``` bundle add statelydb ``` ### Python ``` pip install statelydb ``` ### TypeScript ``` npm install @stately-cloud/client ``` ## Define a Schema (guides/schema) This guide will walk you through defining a basic StatelyDB Schema. Schema is how you define all the different data models you’ll store in the database - their structure, data types, and key paths. First you’ll get your development environment set up, then you’ll define and publish a schema. ## Prerequisites * Make sure you’ve followed all the steps in [Getting Started](/guides/getting-started). You will need to download the [StatelyDB CLI](/guides/getting-started), which is needed for working with schema definitions, and you should have a Store and an empty Schema created for you. * You will need to install [NodeJS](https://nodejs.org/en/download/package-manager/current) because our Schema language is based on TypeScript. ## Create your schema Stately [schemas](/concepts/schema) are TypeScript code that’s checked in to your code repository alongside your service code. You write your schema in TypeScript even if you’ll be using a Go, Ruby, or Python client. You describe all of your data types in code, and then use the Stately CLI to publish changes to it. Each StatelyDB [Store](/concepts/stores) has an associated schema, and multiple stores can share the same schema. In your code repository, use the [StatelyDB CLI](/guides/getting-started) to initialize a new schema package (it can be any directory name you want but we’ll call it `schema` here): ``` stately schema init ./schema ``` This will create a directory containing a new NodeJS package: ``` schema/.gitignore schema/package.json schema/README.md schema/schema.ts schema/tsconfig.json ``` Next we’ll need to install the NPM dependencies: ``` cd ./schema npm install # or whatever your favorite JavaScript package manager is ``` The `schema.ts` file is where you will write your schema, using helper methods imported from the [`@stately-cloud/schema`](https://www.npmjs.com/package/@stately-cloud/schema) package.
We recommend opening the schema directory in [VSCode](https://code.visualstudio.com) since it has built-in support for TypeScript. If you use an AI coding assistant [we have tools](/schema/ai-copilot) that can help it write StatelyDB schemas for you. ## Add an Item Type [Item Types](/concepts/items) are the top-level documents in your database - you can have many different item types in a Store, and each item type can be manipulated with our Data APIs. Many applications have some sort of “user” model, so let’s start with that: ``` import { itemType, string, timestampSeconds, uint, uuid, } from "@stately-cloud/schema"; /** A user of our fantastic new system. */ itemType("User", { keyPath: "/user-:id", fields: { id: { type: uuid, initialValue: "uuid" }, displayName: { type: string }, email: { type: string }, lastLoginDate: { type: timestampSeconds }, numLogins: { type: uint }, }, }); ``` There are a few things going on here: 1. First, we import a bunch of schema helpers from the `@stately-cloud/schema` package. VSCode can help you automatically add these imports. 2. We [declare a new Item Type](/schema/builder) to describe a User. 3. The User item type has a [key path](/concepts/keypaths) of `/user-:id`. This is like a “primary key” in other databases - it’s the path you’ll use to access a specific user. An individual User might have a key path like `/user-p05olqcFSfC6zOqEojxT9g`. 4. Lastly there are the actual [fields](/schema/fields) of the item. Each field has a name and a [type](/schema/data-types). * There are a few different data types in use - basic types like `string`, `uint` (unsigned integer), and more complex types like `uuid` and `timestampSeconds`. These are provided as part of `@stately-cloud/schema`, but you can also make custom types with the `type(...)` helper function. * The `id` field has an `initialValue`—this means that StatelyDB will pick a new UUID whenever you create a new User. This is just scratching the surface of what you can express with schema. See [Defining Item Types](/schema/builder) for more things to try. ## Generating Language-Specific Code You can now generate the typed client code for one of our supported programming languages. Generating the SDK can be done in one of two ways: preview or release. Preview mode generates the SDK based on your local changes, which is useful for rapidly iterating and integrating the generated code without having to publish new schema versions. Release mode generates everything that preview mode does plus a customized client that speaks the specific version you generate it with. Release mode requires that you have published the schema version to Stately Cloud, so it can be nice to start with preview mode, then switch to release mode when you’re ready to integrate the client into your application. Generating an SDK in `--preview` mode is done via the [StatelyDB CLI](/guides/getting-started) with the command: ### Go ``` stately schema generate \ --language go \ --preview ./schema/schema.ts \ ./pkg/schema ``` ### Ruby ``` stately schema generate \ --language ruby \ --preview ./schema/schema.ts \ ./lib/schema ``` ### Python ``` stately schema generate \ --language python \ --preview ./schema/schema.ts \ ./src/schema ``` ### TypeScript ``` stately schema generate \ --language ts \ --preview ./schema/schema.ts \ ./src/schema ``` If you look in the output directory you specified, you’ll now see some files that export objects for all of your item types. You can iterate on your schema and generate a preview as many times as you wish.
Once you’re satisfied you can publish the schema to Stately Cloud. ## Publishing your schema Once you have written out your schema definition, you need to [publish the schema](/schema/updating) into your Stately Cloud account. If you haven’t already, go into the [Console](https://console.stately.cloud) and create a new Store—it will come with a new Schema ID. Future stores can share that schema ID or start with a new one. If you already have a store, you can find the Schema ID of the schema you want to update in the [Console](https://console.stately.cloud) and in the output of [`stately whoami`](/cli#whoami). Then, using the [StatelyDB CLI](/guides/getting-started): ``` stately schema put \ --schema-id \ --message "First version" \ path/to/schema.ts ``` ## Generating Language-Specific Code for release When you’re ready to wire up the Stately client in your application, you can generate the SDK in release mode. This will generate an SDK similar to `--preview` but with a customized client based on the schema ID and version that you have published to Stately Cloud. ``` stately schema generate \ --language go \ --schema-id \ --version \ ./lib ``` ## Create a Client (guides/connect) Let’s get your backend code talking to StatelyDB so you can start [reading and writing data](/api/put). ## Prerequisites 1. [Create a Stately account and a Store](/guides/getting-started#create-a-stately-cloud-account) 2. [Install an SDK for your language](/guides/getting-started#install-an-sdk) 3. [Define a schema and generate language-specific code](/guides/schema) ## Creating a Client When you [generated code for your schema](/guides/schema#generating-language-specific-code), you got a customized client creation function that knows about all of your schema’s types. You can import that function to create a client, and then use that client from your backend code to interact with StatelyDB. To authenticate your client to the StatelyDB service, you will need an Access Key. You can create access keys in the [console](https://console.stately.cloud) by clicking “Manage Access Keys”. The access key is a secret that can be passed explicitly when creating a client, or it will be picked up automatically from a `STATELY_ACCESS_KEY` environment variable. You will also need to specify your Store ID and the region for your store. Both of those are available in the [console](https://console.stately.cloud), or by running `stately whoami` with the CLI. The console also has a customized client configuration snippet for you to copy/paste, but it will look like: ### Go ``` package main import ( "context" // The StatelyDB SDK "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) // Create a client for interacting with a Store. func makeClient() stately.Client { // This uses the generated "NewClient" func in your schema package. return schema.NewClient( context.TODO(), 12345, // Store ID &stately.Options{Region: "us-west-2"}, ) } ``` ### Ruby ``` # This is the code you generated from schema require_relative "./schema/stately" client = StatelyDB::Client.new(store_id: 12345, region: "us-west-2") ``` ### Python ``` # Import from the package that you generated with `stately generate`. # We've used a relative import here for convenience but for you it might # be different.
from .schema import Client client = Client(store_id=12345, region="us-west-2") ``` ### TypeScript ``` // This is the code you generated from schema // Create a Data client for interacting with a Store const client = createClient( 12345, // Store ID { region: "us-west-2" }, ); ``` ## Providing an explicit Access Key If you don’t want your access key to be read from environment variables, you can provide them explicitly: ### Go ``` package main import ( "context" // The StatelyDB SDK "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) // Create a client for interacting with a Store. func makeClientWithCredentials() stately.Client { // Use the generated "NewClient" func in your schema package. return schema.NewClient( context.TODO(), 12345, // Store ID &stately.Options{ Region: "us-west-2", AccessKey: "my-access-key", }, ) } ``` ### Ruby ``` # This is the code you generated from schema require_relative "./schema/stately" client = StatelyDB::Client.new( store_id: 12345, region: "us-west-2", token_provider: StatelyDB::Common::Auth::AuthTokenProvider.new( access_key: "my-access-key" ) ) ``` ### Python ``` # The StatelyDB SDK from statelydb import init_server_auth # Import from the package that you generated with `stately generate`. # We've used a relative import here for convenience but for you it might # be different. from .schema import Client token_provider, stopper = init_server_auth( access_key="my-access-key", ) client = Client( store_id=12345, region="us-west-2", token_provider=token_provider, token_provider_stopper=stopper, ) ``` ### TypeScript ``` // The StatelyDB SDK // This is the code you generated from schema // Create a Data client for interacting with a Store const client = createClient( 12345, // Store ID { region: "us-west-2", authTokenProvider: accessKeyAuth({ accessKey: "my-access-key", }), }, ); ``` # Connecting to a Bring Your Own Cloud (BYOC) deployment If you’re [deploying the StatelyDB data plane yourself (i.e. BYOC mode)](/deployment/byoc), you need to specify the endpoint to talk to your sidecar, and disable authentication: ### Go ``` package main import ( "context" // The StatelyDB SDK "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) // Create a client for interacting with a Store. func makeClientBYOC() stately.Client { // Use the generated "NewClient" func in your schema package. return schema.NewClient( context.TODO(), 12345, // Store ID &stately.Options{ Endpoint: "http://localhost:3030", NoAuth: true, }, ) } ``` ### Ruby ``` # This is the code you generated from schema require_relative "./schema/stately" client = StatelyDB::Client.new( store_id: 12345, endpoint: "http://localhost:3030", no_auth: true, ) ``` ### Python ``` # The StatelyDB SDK # Import from the package that you generated with `stately generate`. # We've used a relative import here for convenience but for you it might # be different. 
from .schema import Client client = Client( store_id=12345, endpoint="http://localhost:3030", no_auth=True, ) ``` ### TypeScript ``` // The StatelyDB SDK // This is the code you generated from schema // Create a Data client for interacting with a Store const client = createClient( 12345, // Store ID { endpoint: "http://localhost:3030", noAuth: true, }, ); ``` ## Save and Access Items (guides/using-apis) ``` ``` [Stately’s API](/api/put) offers a small handful of simple methods for creating and accessing your Items. ## Prerequisites 1. [Create a Stately account and a Store](/guides/getting-started#create-a-stately-cloud-account) 2. [Install an SDK for your language](/guides/getting-started#install-an-sdk) 3. [Define a schema and generate language-specific code](/guides/schema) 4. [Set up a client to talk to your Store](/guides/connect) ## Saving Items with Put Using the simple User item type from [Define a Schema](/guides/schema), we can quickly create new Users and [Put](/api/put) them into our Store: ### Go ``` package main import ( "context" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func CreateUser( ctx context.Context, client stately.Client, ) (uuid.UUID, error) { user, err := client.Put(ctx, &schema.User{ // No Id is needed, it will be auto-generated by Stately DisplayName: "Stately Support", Email: "support@stately.cloud", LastLoginDate: time.Now(), NumLogins: 1, }) if err != nil { return uuid.Nil, err } userID := user.(*schema.User).Id return userID, nil } ``` ### Ruby ``` def create_user(client) item = client.put(StatelyDB::Types::User.new( # No id is needed, it will be auto-generated by Stately displayName: "Stately Support", email: "support@stately.cloud", lastLoginDate: Time::now.to_i, numLogins: 1, )) return item.id end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import key_path from .schema import Client, User async def create_user(client: Client) -> None: item = await client.put( User( # No id is needed, it will be auto-generated by Stately displayName="Stately Support", email="support@stately.cloud", lastLoginDate=time.time(), numLogins=1, ) ) return item.id ``` ### TypeScript ``` import { createClient, DatabaseClient, Movie, User, } from "./schema/index.js"; async function createUser(client: DatabaseClient) { const item = await client.put( client.create("User", { // No id is needed, it will be auto-generated by Stately displayName: "Stately Support", email: "support@stately.cloud", lastLoginDate: BigInt(Date.now() / 1000), numLogins: 1n, }), ); return item.id; } ``` Our generated code contains typed objects for the User item type, so we can directly create that object and then call `client.put` to save it. The returned value is the version of the item that was saved in the database, including fields that StatelyDB filled in, such as the User’s new `id`. 
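Tying this together with the client from the previous guide, here is a minimal TypeScript sketch of calling the `createUser` helper defined above. The Store ID and region are placeholders to replace with your own values, and the import reuses the generated `createClient` function shown in the client examples.

```typescript
import { createClient } from "./schema/index.js";

// Create a client (see "Create a Client") and save a new User with Put.
const client = createClient(12345, { region: "us-west-2" });
const newUserId = await createUser(client);
console.log("created user", newUserId);
```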
## Retrieving Items with Get We can [Get](/api/get) the user back using its key path: ### Go ``` package main import ( "context" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func GetUser( ctx context.Context, client stately.Client, userID []byte, ) (*schema.User, error) { item, err := client.Get( ctx, "/usr-"+stately.ToKeyID(userID), ) if err != nil { return nil, err } return item.(*schema.User), nil } ``` ### Ruby ``` def get_user(client, user_id) user = client.get(StatelyDB::KeyPath.with('usr', user_id)) return user end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import key_path from .schema import Client, User async def get_user(client: Client, user_id: UUID) -> User | None: return await client.get(User, key_path("/usr-{id}", id=user_id)) ``` ### TypeScript ``` import { createClient, DatabaseClient, Movie, User, } from "./schema/index.js"; async function getUser( client: DatabaseClient, userID: Uint8Array, ): Promise { const user = await client.get("User", keyPath`/usr-${userID}`); return user; } ``` ## More APIs `Put` and `Get` are the two most basic APIs - see [Delete](/api/delete), [List](/api/list), and [Transactions](/api/transaction) for more things you can do. # Concepts ## Organizations (concepts/organizations) Organizations are a top-level container for [Stores](/concepts/stores) and [Schemas](/concepts/schema). Organizations have Members which are real people, and Access Keys for programmatic access. Real people can be members of multiple Organizations, while Access Keys are bound to a single Organization. Any member or access key of an organization can access any Store or Schema in that Organization. When you sign in to the [Stately Console](https://console.stately.cloud) for the first time, we give you your own Organization, but we can create new Organizations for you and your teammates. Just contact us at to request a new Organization or for members to be added or removed. ## Stores (concepts/stores) All of your StatelyDB data lives in a Store. You can think of a Store as a container for all the data related to a particular application or project. It’s similar to a “database” in other database applications. Stores can contain multiple types of data - they are not organized into separate “tables” or “collections”. You should generally have one store per application, which holds all of the different kinds of data that application needs. You can have more than one Store in your Organization. You can have one Store for your development environment, another for your production environment, and yet another for your testing environment. Each Store can have its own set of data. Each Store has a [Schema](/concepts/schema) associated with it, and multiple Stores can all share the same Schema. The Schema defines what Item Types are valid to have in the Store. ## Schema (concepts/schema) A Schema defines all of the different [Item Types](/schema/builder), [Object Types](/schema/data-types#objects), and [Enums](/schema/data-types#enums) that make up your data model. The definition for these lives in your code repository as TypeScript files, but you must publish your schema to StatelyDB so your Stores can use them. A Schema lives in your [Organization](/concepts/organizations), and you can have multiple Schemas. 
Each Schema may be bound to one or more [stores](/concepts/stores) - for example, you can share one schema between your development and production Store. Schemas have multiple versions. This is one of the big differences between Elastic Schema and traditional schemas. In other databases, if they have a schema at all, there is only one active definition for the database’s schema. In contrast, in StatelyDB *all* versions of your schema are usable at once—older clients built from earlier schema versions see the database in terms of that earlier schema version, and clients using newer or older schema versions can read and write the same data. Each schema can be versioned by modifying the current schema version and describing the changes using [migration commands](/schema/migrations). This allows you to evolve your schema over time without breaking existing clients. ## Items (concepts/items) In StatelyDB data is stored in units called Items. An Item is the basic building block of your data, similar to a row in a relational database or a document in a document-based database. Each Item Type is modeled in a [Schema](/concepts/schema). Items have defined fields with specific data types, and can have nested objects and lists. Your [Store](/concepts/stores) can have many different Item Types. Items may also have a [hierarchical relationship](/concepts/keypaths#child-items-building-relationships), where a single parent Item contains multiple children that are independent Items. Let’s look at an example of a Customer profile in an e-commerce app: ``` import { itemType, string, uuid, timestampMilliseconds, } from "@stately-cloud/schema"; itemType("Customer", { keyPath: "/cust-:id", fields: { name: { type: string }, id: { type: uuid, initialValue: "uuid" }, address: { type: string }, registeredAt: { type: timestampMilliseconds, fromMetadata: "createdAtTime", }, updatedAt: { type: timestampMilliseconds, fromMetadata: "lastModifiedAtTime", }, }, }); ``` This schema defines a Customer item type that stores some basic information. There are a number of fields such as `id` and `address`, with specific data types `uuid` and `string`. Notice how you can easily reference automatically-tracked metadata directly in the Item. [Defining Item Types](/schema/builder) goes over how Items are described in more detail. ## Key Paths (concepts/keypaths) Every [Item](/concepts/items) has one or more Key Paths. A Key Path is like an address for your Item – it tells StatelyDB exactly where to find (or where to put) the item. Key Paths are crucial for efficiently organizing and retrieving your data—they play a similar role to primary keys and indexes in other database systems, and can be used to easily fetch many items of different kinds with a single [`List`](/api/list) operation. Key Paths are composed of one or more segments, each in the format `namespace-ID`. They’re designed to be similar to file paths or URLs, which makes them intuitive to work with. The [`Customer` example on the previous page](/concepts/items) has a key path of `/cust-:id`. An `Order` Item might have a key path that incorporates the `Customer` it belongs to: ![Diagram of a key path, with the segments highlighted as "key segment", and the namespace and ID segments also called out](/_astro/keypath.C8BqMeIc_ZNonwv.svg) An individual `Order` would fill in this “key path template” with real values, so it might look something like `/cust-1234/order-6`.
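In client code, concrete key paths like this are typically built with the `keyPath` template helper that appears in the API examples later in these docs. A small sketch follows; the import location from `@stately-cloud/client` is an assumption, so check your SDK’s exports:

```typescript
import { keyPath } from "@stately-cloud/client"; // assumed export location

const customerId = 1234n;
const orderId = 6n;
// Fills in the "/cust-:id/order-:id" style template with real values.
const orderKeyPath = keyPath`/cust-${customerId}/order-${orderId}`;
// orderKeyPath is "/cust-1234/order-6"
```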
The **namespace** portion of each key path segment helps you distinguish between different types of items, but they don’t need to be the same as the item type name. For example, if your Store has two item types, `Customer` and `Order`, which both have UUID `id` fields, you might choose the key paths `/cust-:id` and `/ord-:id`. If you didn’t have the namespace, the two keys would collide and you wouldn’t be able to tell which was a `Customer` and which was an `Order` just by looking at them. Note that key paths make up part of the storage of items, so it can be helpful to choose short namespaces. The **ID** portion of each key path segment may be a string, number, or binary data (base64-encoded). An example of binary data is a [UUID](/schema/data-types#uuids) - Stately stores standard 128-bit UUIDs as 16-byte binary instead of the 36-byte string format, to save space. #### Example Key Paths | Key Path | Item Type | ID | | ------------------------------------ | ---------- | --------------- | | `/cust-p05olqcFSfC6zOqEojxT9g` | `Customer` | `uuid` (Base64) | | `/ord-2` | `Order` | `uint64` | | `/cust-p05olqcFSfC6zOqEojxT9g/ord-2` | `Order` | `uint64` | | `/li-B0881XTCR3` | `LineItem` | `string` | Key Paths provide a lot of flexibility in how to structure your data. You can use them to create hierarchies, represent relationships between different types of data, and more. ## Child Items: Building Relationships Key Paths allow us to create parent-child relationships between Items by adding multiple segments, as shown above in the `Order` example. We call Items that are nested under other Items “Child Items”. Child Items are particularly useful when you have data that logically belongs together, but you want to be able to access or modify individual parts independently. For example, you can update or append individual child Items instead of updating one huge Item (like adding an `Order` to a `Customer`). Or you might want to retrieve only a subset of child items (like listing out only the Orders from the last month). #### Example Imagine you want to store a customer’s orders as Child Items of their customer profile. Here’s how that might look: ``` itemType("Order", { keyPath: [ "/order-:id", "/customer-:id/order-:id", ], fields: { id: { type: uint, initialValue: "sequence" }, customerId: { type: uuid }, zipCode: { type: string }, created: { type: timestampMilliseconds, fromMetadata: "createdAtTime", }, }, }); ``` You can continue to nest more Items as children of these child Items. For example, you could store individual line Items for the order as child Items with nested key paths: ``` itemType("LineItem", { keyPath: "/cust-:customerId/ord-:orderId/li-:upc", fields: { upc: { type: string }, customerId: { type: uuid }, orderId: { type: uuid }, name: { type: string }, quantity: { type: uint }, created: { type: timestampMilliseconds, fromMetadata: "createdAtTime", }, }, }); ``` With this structure, you can access all orders and their line items for a specific customer in a single operation, by listing all Items that start with `/cust-p05olqcFSfC6zOqEojxT9g/order` (assuming the customer ID is `p05olqcFSfC6zOqEojxT9g`). ## Multiple Key Paths Each Item Type can have multiple key path templates. This works much like an index in other databases—each key path gives you an opportunity to access the data in a different way. StatelyDB stores a separate copy of the Item under each key path, and keeps them in sync automatically. 
For example, look back at our `Order` Item Type: ``` itemType("Order", { keyPath: [ "/order-:id", "/customer-:id/order-:id", ], fields: { // ... } }); ``` Because `Order` has two key paths, you can `Get` an order with only its ID using the `/order-:id` key path, or `List` all orders for a customer using the `/customer-:id/order-:id` key path. ### Optional Key Paths StatelyDB also offers what would be considered sparse indexes in other databases. Key Paths are conditionally populated based on field presence. In other words, if any key path contains an unset field, the item (if any) at that key path location will be removed. Suppose the `Order` Item Type also had an optional `address` field. You could create a key path template like `/cust-:customerId/ord-:orderId/address-:address`. This would only create key paths for orders that have an address, allowing you to list all orders for a customer that have an address. Additionally, if a specific order’s address is removed, the address key path will be removed as well. ## Group Keys The first segment of a key path is special - it is called the Group Key. Read about [Groups](/concepts/groups) for more info on why that’s important. ## Learn More There’s more information about how to choose and structure your key paths in the Schema documentation: [Key Paths](/schema/keypaths). ## Groups (concepts/groups) The first segment in a [key path](/concepts/keypaths) is special—we refer to it as the “group key”. All the [Items](/concepts/items) that share the same group key will be partitioned together in the database, and the set of all items that share the same group key is called a **Group**. If you’re familiar with DynamoDB you may know this as the “partition key”. In the e-commerce example from [Key Paths](/concepts/keypaths), the Group Key is `/cust-p05olqcFSfC6zOqEojxT9g` and all the Items for that customer - `Customer`, `Order`s, and `LineItem`s - are in the same Group. ![Diagram showing the first segment of a key path labeled "group key"](/_astro/groupkey.DShDWjRr_Z1GXbsz.svg) You generally want to spread your data out amongst many Groups, since different Groups can be distributed to different places in the database, allowing them to scale independently. Fortunately, most problems naturally have some grouping of data, such as by user. While StatelyDB supports cross-group operations in [transactions](/api/transaction) and batch operations, it is optimized for work within a single group. ## Group Versions: Tracking Changes Each Group has a *version* associated with it. This is a number that automatically increases by one for every update performed in the group regardless of the number of Items that change. On the first write to a Group, the Group’s version is `1`. On a subsequent batch put of two additional Items, the Group’s version is `2`, though there are three total Items in the group. If the first Item is then deleted, the version becomes `3`. And lastly, if the third Item is modified, the version becomes `4`. For groups that support Versions (by default this is all groups), StatelyDB maintains `CreatedAtVersion` and `LastModifiedVersion` metadata for the Items within a group. Following from the example above, the first Item would have `CreatedAtVersion = 1, LastModifiedVersion = 1` (before it is deleted), the second Item would have `CreatedAtVersion = 2, LastModifiedVersion = 2`, and the third Item would have `CreatedAtVersion = 2, LastModifiedVersion = 4`. You can read this metadata by [mapping it to fields](/schema/fields#metadata-fields).
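As an illustration, a sketch of what that mapping might look like in schema, following the same `fromMetadata` pattern as the `Customer` example earlier. The metadata names used here (`createdAtVersion`, `lastModifiedVersion`) are assumed by analogy with the timestamp metadata and the version names above, so check the fields reference before relying on them:

```typescript
import { itemType, string, uint, uuid } from "@stately-cloud/schema";

itemType("Order", {
  keyPath: "/cust-:customerId/ord-:id",
  fields: {
    id: { type: uint, initialValue: "sequence" },
    customerId: { type: uuid },
    status: { type: string },
    // Assumed metadata names, mirroring createdAtTime/lastModifiedAtTime above.
    createdAtVersion: { type: uint, fromMetadata: "createdAtVersion" },
    lastModifiedVersion: { type: uint, fromMetadata: "lastModifiedVersion" },
  },
});
```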
Group Versions are a powerful feature for tracking changes within a group of Items: they let you understand the order in which items were created or modified within the same group. They are also what makes [SyncList](/api/sync) work!

## Listing All Groups

You can use the [Scan](/api/scan) API to list all items of a given type, including item types that represent top-level groups. Because of the way StatelyDB distributes Groups amongst partitions within the database (and eventually, across different databases around the world), listing all items can be slow and/or expensive since it needs to read the whole database. Most applications will not need to perform an operation like this if the Group Key is chosen well - for example, it is not frequently necessary to list out all users of an application in one place. The Scan API is there if you do need this capability.

# Schema

## Defining Item Types (schema/builder)

Stately [schemas](/concepts/schema) are TypeScript code that’s checked in to your code repository alongside your service code. You write your schema in TypeScript even if you’ll be using a Go, Ruby, or Python client, using helper methods imported from the [`@stately-cloud/schema`](https://www.npmjs.com/package/@stately-cloud/schema) package. We recommend opening your schema directory in [VSCode](https://code.visualstudio.com) since it has built-in support for TypeScript. Make sure to follow the [Define a Schema](/guides/schema) guide to get set up with the tools and configuration to build schemas. This document is a reference for how the schema builder API works, and what options you have for defining schema.

## Declaring Types

Your schema will consist of as many different type declarations as you want. Each of these is constructed through one of the type builders in the `@stately-cloud/schema` package, such as `itemType`, [`objectType`](/schema/data-types/#objects), [`enumType`](/schema/data-types/#enums), or [`type`](/schema/data-types/#custom-scalars). Since your schema is defined using regular TypeScript, you can use variables, shared constants, functions, even loops (please be reasonable). This also means you can break up your schema into multiple files and import them all together [using JavaScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) however you like. The only thing to keep in mind is that the CLI will load a single file to find all your types, so you should have a top level `schema.ts` or `index.ts` that imports all your other files. For example, if you’ve made a `user.ts`, `address.ts`, and `orders.ts` that each have some types in them, you could have an `index.ts` that looks like:

index.ts

```
export * from "./user.js";
export * from "./address.js";
export * from "./orders.js";
```

Notice that the file extensions are `.js`, not `.ts`. That’s because TypeScript gets translated to JavaScript before being run. It’s weird, we know!

Here’s an example type declaration (the `...` is a lot of omitted code):

```
itemType("User", { ... });
```

The type builder function `itemType` is configuring a new type. The name for the new type is the first argument to the `itemType` function.

## Assigning Types to Variables

Unlike `itemType`s, custom types like [`objectType`](/schema/data-types/#objects), [`enumType`](/schema/data-types/#enums), or [`type`](/schema/data-types/#custom-scalars) are used to declare reusable types that you can use as fields of other types. This means we need to assign them to a variable so we can reference them in [fields](/schema/fields).
``` const Address = objectType("Address", { ... }); // Now we can use Address as a field of User itemType("User", { fields: { ... address: { type: Address }, ... } }) ``` Here we assign a type to the variable `const Address` - the name of that variable doesn’t actually matter, but it’s less confusing if you name it the same as the name of the type you’re declaring. There’s an alternate way to declare types, using a function (*not an arrow function*): ``` export function Address() { return objectType("Address", { ... }); } ``` This has the same result, but there’s one big advantage - types declared as functions are resolved lazily, which means you can use a type before it’s declared. This is very helpful when making circular data structures—imagine a `User` has an `Address` but the `Address` also has a list of `Users`—which one needs to be declared first? If you declare at least one of them as a function, it doesn’t matter. ## Item Types Each [Item](/concepts/items) in your Store belongs to an Item Type which defines its shape and how it is stored. Item Types are declared using the `itemType` function: ``` itemType("User", { keyPath: ["/user-:id"], fields: { id: { type: uuid, initialValue: "uuid", }, // more fields... }, }); ``` Each Item Type has the following required properties: 1. A name, which is the first argument to the `itemType` function. Item Type names must be unique within the schema, and by convention are CamelCase. 2. At least one [`keyPath`](/concepts/keypaths) which provides an address to store the Item at. 3. [`fields`](/schema/fields) that define all the fields of the item type. This is an object where each property is the name of a field, and the value configures that field. Field names are by convention camelCase. Along with some optional properties: 4. `ttl` ([Time To Live](/schema/ttl)) - A way to define how long an Item should exist before being automatically deleted from the store. 5. `indexes` - A list of group-local indexes that allow for different ways to [list](/api/list) items in the same [Group](/concepts/groups). ## Documenting Schema You can document your types and their fields using regular JSDoc comments (`/** */`). We analyze your source code to automatically bring these comments along in your generated code. Normal comments (`//` or `/* */`) are ignored. Note that if you stray from the patterns above and generate your types *dynamically*, your documentation might not make it through the process. ``` /** * This type represents a user in my system. */ itemType("User", { keyPath: ["/user-:id"], fields: { /** This field is documented */ id: { type: uuid, initialValue: "uuid", }, // more fields... }, }); ``` ## Fields (schema/fields) Every field of an Item (or [object](/schema/data-types#objects)) type requires at least: 1. A name, which is used in generated code and can be referenced in [key path](/schema/keypaths) templates. 2. A [`type`](/schema/data-types) which determines what kind of values can go in the field, how the field is stored, and what type the generated code will use for it. Besides that, there are some optional properties: 1. `required`: By default, **all fields are required**, meaning it must be set or the item is invalid. The definition of “set” is a value other than the [zero value](/schema/data-types#zero-values) for that type. Set `required: false` to allow a field to be unset. 2. `valid`: A validation expression for the field, expressed in the [CEL](https://github.com/bufbuild/protovalidate/blob/main/docs/cel.md) language. 
This is *in addition* to any validation inherent in the data type itself. 3. `deprecated`: This will cause a deprecation annotation to be added to the generated code. 4. [`fromMetadata`](#metadata-fields) which populates the field from Item metadata. 5. [`initialValue`](#initial-value-fields) which automatically sets the field value when the Item is created. 6. [`readDefault`](#read-default): A default value to use when reading the field from the database. For example, here’s a `User` item type with a few fields: ``` itemType("User", { keyPath: "/user-:id", fields: { id: { type: uuid, initialValue: "uuid", }, displayName: { type: string, required: false, // it's OK not to set a name }, email: { type: string, valid: 'this.matches("[^@]+@[^@]+")', }, lastLoginDate: { type: timestampSeconds, }, numLogins: { type: uint, }, }, }); ``` ## Initial Value Fields There are times when you want your database to choose a value for you. One common pattern is picking a unique identifier for an Item. For example, in the relational database world this is often an auto-incrementing integer (eg: `AUTO_INCREMENT` in MySQL or `SERIAL`/`SEQUENCE` in Postgres). When StatelyDB chooses an identifier via `initialValue`, it *guarantees* that no Item already exists with the same key path. It is an error to mark a field as having an `initialValue` when that field is not referenced from any of the Item Type’s key paths. You can specify the `initialValue` property when configuring a field, with the following variants: ### uuid `initialValue: 'uuid'` produces a globally unique UUIDv4. UUIDs are a good choice for when you need an ID to be globally unique no matter where the ID is in its key path. The type of the field must also be [`uuid`](/schema/data-types#uuids). ### rand53 `initialValue: 'rand53'` generates an unsigned random 53-bit integer. 53-bit integers are the [maximum safe integer size in Javascript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER), so this is a good choice if you are primarily using StatelyDB in a Javascript environment and want maximum compatibility. It is also a 50% smaller alternative to UUIDs that can still be unique within your Store. The type of the field must be a `uint`. Note that unlike UUIDs, `rand53` values may repeat in different key paths—for example you might have `/user-1234` and `/post-1234` and `/post-6789/comment-1234`. The only thing that matters here is that the entire key path is unique. ### sequence `initialValue: 'sequence'` generates an unsigned, monotonically increasing integer starting from 1, where the counter is unique per parent Key Path. This is ideal for when data should have a consistent order based on insertion order, such as a list of messages within a conversation. The type of the field must be an integer type, ideally `uint`. This can only be used for items within a [Group](/concepts/groups)—it cannot be used in a Group Key. For example, if we have the key path template `/customer-:customerID/order-:orderId` for the Order Item type. `orderId` can be a `sequence`, and each order ID for a customer will count up 1, 2, 3, and so on. You can have another key path template for a LineItem that looks like `/customer-:customerId/order-:orderId/li-:lineItemId` and `lineItemId` can be a `sequence`, and each line item *within an Order* will count up 1, 2, 3, and so on. In this setup, the third line item for the fifth order a Customer makes would have the key path `/customer-12957/order-5/li-3`. 
However, `customerId` cannot be a `sequence`, because it is a Group Key. If we allowed Group Keys to be `sequence`s, then Groups would no longer be well-partitioned, which could impact scaling. ## Metadata Fields Every Item in StatelyDB automatically tracks the following metadata: * `createdAtTime` - The timestamp in microseconds when the Item was created. * `lastModifiedAtTime` - The timestamp in microseconds when the Item was last modified. * `createdAtVersion` - The [Group Version](/concepts/groups#group-versions-tracking-changes) of the Item’s [Group](/concepts/groups) minted when the Item was created. * `lastModifiedAtVersion` - The [Group Version](/concepts/groups#group-versions-tracking-changes) of the Item’s [Group](/concepts/groups) minted when the Item was last modified. * `ttl` - If the Item Type has a [TTL](/schema/ttl), this is the effective timestamp when the Item will expire. By default these fields are not exposed. In order to use these fields you need to define a field in the Item type definition in your schema, and map it to one of these metadata with `fromMetadata`. Any field with `fromMetadata` is read-only and any values written to it will be ignored. > **Note:** Metadata fields still require the correct type so that we know how to present the underlying data. For instance, it is necessary to choose the granularity of timestamp desired to access any of the timestamp-metadata fields. Here is an example of a model that uses all of these metadata fields: ``` import { itemType, timestampMicroseconds, timestampSeconds, uint, uuid, } from "@stately-cloud/schema"; itemType("MyItemType", { keyPath: "/myitemtype-:id", fields: { id: { type: uuid, }, creationTime: { type: timestampSeconds, fromMetadata: "createdAtTime", }, modifiedTime: { type: timestampMicroseconds, fromMetadata: "lastModifiedAtTime", }, createdAtVersion: { type: uint, fromMetadata: "createdAtVersion", }, modifiedAtVersion: { type: uint, fromMetadata: "lastModifiedAtVersion", }, }, }); ``` ## Read Default Certain [migration commands](/schema/migrations) will require you to set a `readDefault` on a field depending on the specific migration action and the configuration of the field. For example, if you add a new required field (or make an optional field required), existing data in the store may not have a value for that field, so you must specify a `readDefault` to fill in a value on read for new clients. A field’s `readDefault` can be specified as a JSON string or an instance of the field’s type. For object types you can also use a JS object. Data type specific values are also supported, such as duration strings for time-based fields. Read defaults must conform to any `valid` expressions on the field. Here are some examples of `readDefault`s: ``` import { itemType, uuid, objectType, string, timestampSeconds, } from "@stately-cloud/schema"; const Profile = objectType("Profile", { fields: { name: { type: string }, }, }); itemType("MyItemType", { keyPath: "/myitemtype-:id", fields: { id: { type: uuid }, profile: { type: Profile, readDefault: { name: "John Doe" } }, // The string `{ name: "John Doe" }` would also be accepted membershipDate: { type: timestampSeconds, readDefault: 1693526400 }, membershipExpiry: { type: timestampSeconds, readDefault: "1y" }, // duration strings are supported }, }); ``` *** ## Data Types (schema/data-types) The Schema builder allows you to build up your own complex types by composing other data types, either by using types provided by the library or constructing your own. 
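As a quick preview of how these pieces fit together (each of the builders used here—`type`, `objectType`, `arrayOf`, and `itemType`—is covered in the sections below), here is a minimal sketch of composing types; the specific names are illustrative:

```
import { arrayOf, itemType, objectType, string, type, uuid } from "@stately-cloud/schema";

// A custom scalar with a validation rule, reusable anywhere a string fits.
const Email = type("Email", string, {
  valid: 'this.matches("[^@]+@[^@]+")',
});

// A reusable composite type.
const Contact = objectType("Contact", {
  fields: {
    name: { type: string },
    email: { type: Email },
  },
});

// An Item Type composed from the pieces above.
itemType("Mailbox", {
  keyPath: "/mailbox-:id",
  fields: {
    id: { type: uuid, initialValue: "uuid" },
    owner: { type: Contact },
    subscribers: { type: arrayOf(Contact) },
  },
});
```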
## Scalars Scalar fields are the most basic of the data types used in StatelyDB. These data types get mapped into the closest equivalent in the language you’re using. Some of these data types have built-in data validation. Each of these can be imported from `@stately-cloud/schema` and used as the `type` of a field. | Type | Description | | ----------------------- | ---------------------------------------------------------------------------- | | `bool` | A boolean indicating true or false | | `string` | A UTF-8 encoded string | | `int` | A signed integer up to 64 bits | | `uint` | An unsigned (i.e. always positive) integer up to 64 bits | | `int32` | A signed integer limited to 32 bits | | `uint32` | An unsigned integer limited to 32 bits | | `double` | A 64-bit floating point value | | `float` | A 32-bit floating point value | | `bytes` | An arbitrarily sized binary blob | | `uuid` | 16 bytes representing a [UUID](/schema/data-types/#uuids) | | `durationMilliseconds` | A signed integer that indicates a duration of time in milliseconds | | `durationSeconds` | A signed integer that indicates a duration of time in seconds | | `timestampMicroseconds` | A signed integer that indicates a timestamp since unix epoch in microseconds | | `timestampMilliseconds` | A signed integer that indicates a timestamp since unix epoch in milliseconds | | `timestampSeconds` | A signed integer that indicates a timestamp since unix epoch in seconds | | `url` | A string that enforces and validates it is a URL structure | We’ll add more standard types to the schema builder library as time goes on. ## Custom Scalars StatelyDB schema supports creating your own custom type aliases for cases where you want to use a consistent reference in multiple places. For example, you might want to create a type alias for an identifier that is referenced by multiple Item types. Another example would be re-using a common type with a validation rule, like the format of an email address. ``` // An ID of the form MATH-403 PHYS-301, etc. export const CourseID = type("CourseID", string, { valid: 'this.matches("[A-Z]{4}\-[0-9]{3}")', }); const Email = type("Email", string, { valid: 'this.matches("[^@]+@[^@]+")', }); export const StudentID = type("StudentID", uint) ``` The `CourseID` and `Email` types above define their own validation rules using a [CEL expression](https://github.com/bufbuild/protovalidate/blob/main/docs/cel.md) and can be used in place of a raw `string` that might otherwise be needed. The `StudentID` is just a nice named alias for a `uint` to make your schema self-documenting. ## Enums StatelyDB supports defining Enum types that provide a simple mapping of names to numerical values. ``` const Quarter = enumType("Quarter", { Autumn: 1, Winter: 2, Spring: 3, Summer: 4 }); ``` It’s recommended to start your enums at 1, not 0. The schema builder will automatically add a 0 value named “UNSET” if one has not already been specified. This is important because 0 is the “zero value” for an enum—if you had a real value at 0, you couldn’t tell the difference between a field of that type being unset or set to the zero value. See [Zero Values](#zero-values) for more details. > **Note**: When a field with an Enum type is referenced in a key path template, the key path will use the Enum’s number value, not its string value. ## Arrays The `arrayOf` function can take any other type and turn it into an array (ordered list) of that type. Other container types like `mapOf`, `setOf`, etc. are on the roadmap. 
``` const StudentList = arrayOf(StudentID); ``` Currently it is not possible to have arrays of [object types](/schema/data-types/#objects) that expose [metadata fields](/schema/fields/#metadata-fields), as that’s a sign that you probably want to model the array as separate items under a child key path rather than an array. ## Objects Object types allow you to create more complex composite types that can be reused across Item types. An Object type definition in schema looks similar to an Item type, but without a Key Path and without support for attributes like TTLs. You also use the `objectType` builder function instead of `itemType`. Object types can be used as a field type in other Object types and Item types, while Item types *cannot* be used as a field type in another type. The following example shows adding an Object type of `ContactInfo` that contains four fields. The new `ContactInfo` Object type is then referenced by `Student` and `Instructor`. Object types provide a powerful way to define reusable types that be composed together. ``` import { itemType, objectType, string, uint, arrayOf } from "@stately-cloud/schema"; const ContactInfo = objectType("ContactInfo", { fields: { firstName: { type: string }, lastName: { type: string }, email: { type: string }, phoneNumber: { type: string }, } }); itemType("Student", { keyPath: [ "/student-:studentId", "/classof-:graduatingYear/student-:studentId" ], fields: { studentId: { type: uint }, graduatingYear: { type: uint }, contactInfo: { type: ContactInfo }, emergencyContacts: { type: arrayOf(ContactInfo) }, // other student-specific info such as majors, etc. }, }); itemType("Instructor", { keyPath: "/instructor-:instructorId", fields: { instructorId: { type: uint }, contactInfo: { type: ContactInfo }, emergencyContacts: { type: arrayOf(ContactInfo) }, // other instructor-specific info such as tenure, colleges, etc }, }); ``` ## Items Of course, [Item types](/schema/builder) are types. But they’re a bit special since they can’t be used as the field type for another item. In other words, Item types can’t be embedded in other Items. In the future, we’ll handle this via relations and pointers, but for now, it’s forbidden. ## Zero Values You may have guessed that Stately’s schema is based on protobuf, and you’d be right! We didn’t want to reinvent the building blocks of an already-great schema system and binary encoding. However this means we’ve consciously inherited some behavior from protobuf that might not be entirely intuitive. An important thing to understand is that every data type has a zero value, and *there is no distinction between an unset value and a zero value*. For example, if you have a `uint` field and don’t set it at all, that field’s value is 0. If you set it to 0, it’s still 0. One of the great properties this gives us is that *zero values take up no storage space*. But it also can be weird because if a field is required (and almost all fields default to required!), it means you cannot have a zero value in that field. This might be easy to remember for numeric types, but some of the other zero values are less intuitive: * The zero value of an array is an empty array. So by default, all your array fields require there to be at least one item in them! This includes `bytes` fields. * The zero value of a `string` is the empty string. * The zero value of a `bool` is `false`, but we won’t even let you set a `bool` to required - what would that even mean? ## UUIDs StatelyDB’s UUIDs require a bit more explanation. 
You may be familiar with the standard string form for a v4 UUID, like `9edae9a5-fa39-4e45-bfd6-21707067f613`. This is a 36-character representation of what is really a 128-bit (16 byte) value. That’s 20 wasted bytes per UUID, or a 125% overhead! At Stately Cloud we care deeply about storage efficiency and we know that these kinds of things add up. That’s why we always store UUIDs as 16-byte arrays, and even when we convert them to strings (for example, in key paths), we base64-encode the original byte array into a 22-character string instead of using the standard string format (that’s only 38% overhead, and that’s only while the key path is on the wire—we still store it in binary). The downside of this obsession with efficiency is that in your client code, you might end up with a `Uint8Array` (JavaScript) or `[]byte` (Go) which is more annoying to work with than a string. We don’t like this, and our roadmap includes fixing this up, but for now it’s good to be aware of it. There are libraries you can use to translate between bytes and UUIDs in the meantime. ## Key Paths (schema/keypaths) Each [item type](/schema/builder) must have at least one [key path](/concepts/keypaths) that determines where it is stored and how you can access it. The first Key Path is the *primary* Key Path; additional Key Paths are *aliases*. * A key path has one or more “segments”, which are in the form `/namespace-:field`. Note that each segment starts with `/`. For example, `/course-:courseId/year-:academicYear/quarter-:academicQuarter` has three segments. * Each segment consists of a namespace and an ID. In a key path template, the ID is a field reference. * The namespace can be are any combination of letters or underscores (no hyphens, numbers, or other special characters). For example `course`, `year`, and `quarter` are all namespaces. * When defining key paths for an item type, the ID is a field reference which will be replaced with the value of a field. The field references are the name of a field in your item type preceded by a colon (`:`). For example `:courseId`, `:academicYear`, and `:academicQuarter`. * The ID is optional on the last segment of a key path (but must be included in the first segment). For example, a key path of `/course-:courseId/syllabus` could be used for an item that you only have one of per `course`. * The first segment in a key path is also called the [group key](/concepts/groups) and it determines how the item is partitioned in the store. Every item’s key path must contain at least one segment so it can be located on a partition. Let’s see how this looks with example Item types `Student` and `Course`: ``` itemType("Course", { keyPath: [ "/course-:courseId/year-:academicYear/quarter-:academicQuarter", ], fields: { courseId: {type: CourseID}, academicYear: {type: uint}, academicQuarter: {type: Quarter}, courseName: {type: string}, description: {type: string}, instructorIds: {type: arrayOf(uint)}, // ...any other information related to a course. }, }); itemType("Student", { keyPath: [ "/student-:studentId", ], fields: { studentId: {type: uint}, // ...more fields such as phone number, emergency contact }, }); ``` Using the schema above, we can retrieve the `Student` Item with `studentId` of `1234` by [getting](/api/get) `/student-1234`. We can also fetch information about a particular course in a particular quarter of a particular year using a complete keypath such as `/course-MATH321/year-2023/quarter-1`, which will fetch the Item for Math-321 in the Autumn quarter of 2023. 
But we can also fetch all occurrences of a given course across all years and quarters via a [List](/api/list) operation with prefix `/course-MATH321`, or all offerings of a course in a given year via a List with prefix `/course-MATH321/year-2023`.

## Multiple Key Paths

Let’s continue this example to answer the question below:

> Why would I want more than one Key Path (an alias Key Path)?

Defining more than one Key Path for an Item is a way to make it possible to access those Items using different fields. In this way, it’s like an index in other databases, though unlike a traditional database, you can also access *multiple* items at once by Listing with a key path prefix. StatelyDB guarantees that all Key Paths for a single Item are put, updated, or deleted atomically with any change to the Item.

Consider the relationship between a student and a class: a class may have many students and a student may be a member of many classes. Our application requires that we are able to answer the questions “Which classes is a student taking?” and “Which students are taking a given class?”

Let’s build on the previous example and introduce a third Item type that will act as the glue between Students and Courses — the `EnrolledStudent`:

```
itemType("EnrolledStudent", {
  keyPath: [
    "/course-:courseId/year-:year/quarter-:quarter/student-:studentId",
    "/student-:studentId/year-:year/quarter-:quarter/course-:courseId",
  ],
  fields: {
    courseId: {type: CourseID},
    year: {type: uint},
    quarter: {type: Quarter},
    studentId: {type: uint},
    // information pertinent to the student's enrollment in the course,
    // such as payment status, attendance, exam scores, etc.
  },
});
```

[Putting](/api/put) an `EnrolledStudent` Item will result in two records: one under the Student’s [Group](/concepts/groups) and one under the Course’s [Group](/concepts/groups). This allows us to answer the questions above:

* **“Which classes is a student taking?”** - A List with prefix `/student-123` will give us all the courses a student has ever participated in.
* **“Which students are taking a given class?”** - A List with prefix `/course-PHYS341/year-2019` will return all the instances `PHYS341` was offered in `2019` as well as all students who enrolled in those courses.

Beyond those questions, we can exploit the structure of these key paths to answer more specific questions. Listing with a prefix of `/student-123/year-2019/quarter-3` will return only the courses student `123` participated in during the spring quarter of 2019.

Again, you can think of these as a sort of index:

* `/course-:courseId/year-:year/quarter-:quarter/student-:studentId` is similar to `index EnrolledStudent on (courseId, year, quarter, studentId)`
* `/student-:studentId/year-:year/quarter-:quarter/course-:courseId` is similar to `index EnrolledStudent on (studentId, year, quarter, courseId)`

The wild part is that all Items share the same Key Path space. Since we also have key paths of `/course-:courseId/year-:year/quarter-:quarter` (for Course) and `/student-:studentId` (for Student), a single List operation can pick up Students, EnrolledStudents, and Courses all at once.

## Key IDs in Nested Object Type Fields

In certain scenarios, it is advantageous to reference key IDs within nested [Object Type](/schema/data-types#objects) fields. This is particularly useful when Object Type fields contain uniquely identifying data that can serve as part of the Key Path. Stately supports this using dot notation: `/itemType-:objectField.nestedField`.
Consider the following example, where a `ContactInfo` Object Type is shared between a `BuyerAccount` and `SellerAccount` Item Type. By referencing `ContactInfo` fields in the Key Path, you can ensure that no two buyers or sellers can create an account with a previously used phone number or email:

```
const ContactInfo = objectType("ContactInfo", {
  fields: {
    firstName: { type: string },
    lastName: { type: string },
    email: { type: string },
    phoneNumber: { type: string },
  }
});

itemType("BuyerAccount", {
  keyPath: [
    "/buyerAccount-:buyerId",
    "/email-:contactInfo.email",
    "/phone-:contactInfo.phoneNumber"
  ],
  fields: {
    buyerId: { type: uint },
    contactInfo: { type: ContactInfo },
    // other buyer-specific info such as payment info, etc.
  },
});

itemType("SellerAccount", {
  keyPath: [
    "/sellerAccount-:sellerId",
    "/email-:contactInfo.email",
    "/phone-:contactInfo.phoneNumber"
  ],
  fields: {
    sellerId: { type: uint },
    contactInfo: { type: ContactInfo },
    // other seller-specific info such as receiver bank account, etc
  },
});
```

In this example, both the `BuyerAccount` and `SellerAccount` Item Types have key paths that reference the `email` and `phoneNumber` fields within the `ContactInfo` Object Type. This ensures that contact information is unique across both Item Types, providing a consistent, efficient, and reusable way to access and manage data.

## Frequently Asked Questions

Being able to fetch multiple items using List is the power of an Item type with more than one key path, though it is worth answering a few related questions:

> If an Item has more than one Key Path, what happens when I attempt to delete an Item by an ‘alias’ (eg: instead of the primary Key Path)?

Since an alias is a way to address an Item, if you attempt to delete any Item by an alias it will delete the Item. StatelyDB maintains consistency of an Item across all of its Key Paths. In our example above, a delete issued to `/student-123/year-2019/quarter-3/course-MATH321` will also delete `/course-MATH321/year-2019/quarter-3/student-123` and vice versa.

> What happens when a field used to calculate an ‘alias’ changes during an Item update?

To better frame this question, let’s continue to examine the example above and imagine there was a typo in the year when writing an `EnrolledStudent` Item. We meant to use the year `2023` but had accidentally typed `223`. Thus we have one Item with two Key Paths in our database:

1. `/student-123/year-223/quarter-3/course-MATH321`
2. `/course-MATH321/year-223/quarter-3/student-123`

…but want the Key Paths:

1. `/student-123/year-2023/quarter-3/course-MATH321`
2. `/course-MATH321/year-2023/quarter-3/student-123`

How can we correct this problem? If we attempt to write a new `EnrolledStudent` with an updated year, the primary Key Path of this Item is different from the original Item’s Key Path and thus the write will create a new item. The way to correct the problem is to write a new Item and then delete the incorrect Item.

Let’s modify our schema to make this more interesting by adding a new `/classof-:graduatingYear/student-:studentId` alias to the `Student` Item type:

```
itemType("Student", {
  keyPath: [
    "/student-:studentId",
    "/classof-:graduatingYear/student-:studentId",
  ],
  fields: {
    studentId: {type: uint},
    graduatingYear: {type: uint}
    // ... more fields
  },
});
```

Now we have the Student Item stored at the Key Paths:

1. `/student-123`
2. `/classof-223/student-123`

…but we want the Item stored at the Key Paths:

1. `/student-123`
2. `/classof-2023/student-123`

How can we correct the problem? Easy!
Correct the `graduatingYear` field in the offending `Student` Item and write it back to the store. Since `graduatingYear` is *not* the unique component of the primary Key Path, StatelyDB understands student `123` should only have one `/classof-*/student-123` alias, and will atomically “move” the previous Item `/classof-223/student-123` to `/classof-2023/student-123`.

> Is metadata the same between aliased items?

*Timestamp* metadata is guaranteed to be consistent across all aliases of an Item. However, *version* metadata may differ depending on the originating Key Path, since version is a property of the Group an Item is in.

## Key Path Options

You can set options on a key path. For example, you can set [`syncable` to opt out of syncability](/api/sync/#opting-out-of-sync) for a key path (or an entire item type or schema).

```
itemType("Student", {
  keyPath: [
    { path: "/student-:studentId", syncable: false },
    "/classof-:graduatingYear/student-:studentId",
  ],
  ...
});
```

## Examples

| Key Path Template                                                | Valid?                                                                                              |
| ---------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| `/course-:courseId/year-:academicYear/quarter-:academicQuarter` | ✅ Yes!                                                                                               |
| `/classof-:graduatingYear/student-:studentId`                   | ✅ Yes!                                                                                               |
| `/student-:studentId`                                            | ✅ Yes!                                                                                               |
| `/course-:courseId/syllabus`                                     | ✅ Yes!                                                                                               |
| `/courses`                                                       | 🚫 No - this doesn’t have a full first segment (group key)                                            |
| `/courses/course-:courseId`                                      | 🚫 No - this also doesn’t have a full first segment (group key)                                       |
| `/course-:courseId/years/year-:academicYear`                     | 🚫 No - the segment in the middle (`years`) doesn’t have an ID                                        |
| `/course-:courseId/lecture-notes-:id`                            | 🚫 No - the namespace `lecture-notes` has a hyphen in it. Only letters and underscores are allowed.   |
| `/student-studentId`                                             | 🚫 No - the id `studentId` is missing a colon so it isn’t a valid field reference.                    |

## Time to Live (TTL) (schema/ttl)

Items can have a time-to-live (TTL), also known as an expiration time. After the expiration time passes, the item will be automatically deleted from the system. TTLs can be configured via the `ttl` attribute of an Item type definition. A TTL may be a specific timestamp field of an Item or can be calculated relative to another field or metadata property of an Item. It is important to note that a TTL value is only guaranteed to be respected to the nearest second, and that a [zero value](/schema/data-types#zero-values) in a field used in the TTL calculation disables the TTL.
The examples below demonstrate this in more detail: **An Item with a TTL that expires a constant two hours after its creation:** ``` itemType("MyType", { keyPath: "/mytype-:id", ttl: { source: "fromCreated", durationSeconds: 2*60*60, }, fields: { id: { type: uint } } }) ``` **An Item with a TTL that expires a constant two days after its latest modification:** ``` itemType("MyType", { keyPath: "/mytype-:id", ttl: { source: "fromLastModified", durationSeconds: 2*24*60*60, }, fields: { id: { type: uint } } }) ``` **An Item with a TTL that expires an arbitrary duration (supplied in the `ttlDebounceDuration` field) from its latest modification.** ``` itemType("MyType", { keyPath: "/mytype-:id", ttl: { source: "fromLastModified", field: "ttlDebounceDuration", }, fields: { id: {type: uint}, ttlDebounceDuration: {type: durationMilliseconds} } }) ``` **An Item with an exact (but arbitrary) timestamp TTL.** ``` export const MyType = itemType("MyType", { keyPath: "/mytype-:id", ttl: { source: "atTimestamp", field: "ttlTimestamp", }, fields: { id: {type: uint}, ttlTimestamp: {type: timestampMicroseconds} } }) ``` ## Updating Schema (schema/updating) ## Your First Schema Version After you have updated your schema TypeScript files locally, you must publish it to Stately Cloud before you can use it - until then, the associated Stores will not know about your changes. Using the [CLI](/cli), `stately schema put` publishes a new version of your schema. This will also run your schema TypeScript files. For this to run, you need to have NodeJS installed, and have installed the dependencies for your schema package with `npm install`. The first positional argument is the path to your schema’s index file (you can have schema in as many TypeScript files as you want as long as there’s a single file that exports everything). You can find your Schema ID in the [Console](https://console.stately.cloud) and in the output of [`stately whoami`](/cli#whoami). ``` stately schema put \ --schema-id \ --message "A schema update!" \ ./path/to/schema.ts ``` ## More Schema Versions Your schema is not static—you can update it as your needs change. Unlike some other databases, updating your StatelyDB schema does not immediately change the database for all clients. Instead, each schema version is available for clients to use, and StatelyDB automatically translates between versions to maintain both backwards and forwards compatibility. This means you can update your schema, then update your clients to use the new schema version on your own schedule. Different clients can use different schema versions as long as you’d like, and they’ll all work together. You use `stately schema put` to update your schema, just like when you created your first version. However, for updates to your schema, you’ll also need to write [migrations](/schema/migrations) along with your updated schema to describe what you’ve changed and resolve any ambiguities. The CLI will help point out where you need to add migrations. ## Backwards Compatibility Check When you’re updating your schema, `stately schema put` will fail if you are making a backwards-incompatible change that we can’t handle with [migrations](/schema/migrations) yet. We’re intentionally conservative right now about what counts as a backwards compatible change. You can add the `--allow-backwards-incompatible` argument to override this, but realize **you’re taking the validity of your stored data into your own hands at that point**. Ask us if you’re unsure about whether your change is safe. 
Eventually we will remove the `--allow-backwards-incompatible` option when migrations can handle all types of changes—in the meantime, it is sometimes necessary to override the backwards compatibility check when you know it’s safe. ## Migrations (schema/migrations) When you [update your schema](/schema/updating), you need to change your schema definition in your TypeScript files, but you also need to specify migrations that describe what changes you’ve made. This helps to double-check that you’ve made the changes you intend to, and also resolves some ambiguities - for example, did you mean to rename a field, or did you add a new field and delete an old field? This is part of what allows StatelyDB’s elastic schema to automatically convert between different schema versions and allow different versions of your clients to coexist. ## Declaring Migrations Whenever you update your schema, you need to declare a migration with the `migrate` function. Just like your schema types, the migration needs to be exported from your schema’s top level JavaScript module, and you can use [modules to organize them into different files](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) however you like. The `migrate` function requires the schema version you’re migrating *from* (they start with 1), a human-readable description, and a function that applies one or more migration commands. You can look up the current version number of your schema on the [console](https://console.stately.cloud). Because these migration declarations have a version associated with them, you don’t need to delete them after they’ve been applied—you can keep them around as a record of changes to your schema, or you can clean them out of your schema files after they’re no longer needed. schema.ts ``` // ... the rest of your schema, other migrations, etc. // Declare a migration from version 1 to version 2 that describes a // bunch of changes to the schema. migrate(1, "An example migration", (m) => { m.addType("NewType"); m.removeType("Url"); m.renameType("Account", "Profile"); m.changeType("Profile", (t) => { // was previously named "Account" t.addField("description"); t.removeField("note"); t.renameField("joinDate", "accountAge"); }); }); ``` ## Migration Commands Migration commands are the individual instructions in each migration declaration that tell StatelyDB how your schema’s shape has changed. When you call `stately schema put`, these commands help StatelyDB understand how to migrate items between versions when reading or writing them. The available commands are: ### AddKeyPath Once you’ve declared an item type with some fields and used it for some time, you may find it useful to add additional Key Paths to enable different access patterns or enforce additional unique constraints on your data. For example, you may want to enforce unique emails among all accounts. We can do this by adding an `/email-:email` key path to the existing type. When a schema containing added key paths is published, StatelyDB kicks off a non-blocking operation to populate the new key paths in all Stores bound to this Schema. If the new key path references a field that was also added in this version, there doesn’t need to be a backfill because no item would have that field set. You are free to use the new schema right away or await for all data to be backfilled. The status of individual store backfills can be observed in the console Schema detail page. 
```
itemType("ExistingType", {
  keyPath: ["/name-:name", "/email-:email"],
  fields: {
    name: { type: string },
    email: { type: string },
  },
});

migrate(2, "Enforce unique emails", (m) => {
  m.changeType("ExistingType", (i) => {
    i.addKeyPath("/email-:email");
  });
});
```

### RemoveKeyPath

In some cases you may want to remove a Key Path from an item type such as when you no longer want developers to use it, or when you added it by mistake. In these cases you can run a migration to remove the key path from future versions of the schema. Once removed, StatelyDB will continue to populate the key path to maintain backwards compatibility with older clients. You cannot remove the primary key path of an item type.

```
itemType("ExistingType", {
  // Before: keyPath: ["/name-:name", "/email-:email"],
  keyPath: "/name-:name",
  fields: {
    id: { type: string },
    name: { type: string },
  },
});

migrate(3, "Remove email key path", (m) => {
  m.changeType("ExistingType", (i) => {
    i.removeKeyPath("/email-:email");
  });
});
```

### AddType

You can declare that you’ve added a new [`objectType`, `itemType`, or `enumType`](/schema/builder) by passing the name of your new type to the `addType` migration. This type must exist in your new schema version and not the old version. Clients using this new version can read and write using this new type, while older versions won’t see it. StatelyDB will filter this new type out of [List](/api/list)/[SyncList](/api/sync) results for clients on older versions.

```
itemType("NewType", {
  keyPath: "/newType-:name",
  fields: {
    name: { type: string },
  },
});

migrate(4, "An add type example", (m) => {
  m.addType("NewType");
});
```

### RemoveType

When you remove an `objectType`, `itemType`, or `enumType`, use the `removeType` command and pass the name of the type you have removed. When you remove a type from your schema, newer schema versions will no longer see this type. The items or fields that reference the removed type are still accessible for older versions of the schema, but newer versions of the schema cannot interact with them.

```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
  },
});

migrate(5, "A remove type example", (m) => {
  m.removeType("ExistingType");
});
```

### RenameType

When you rename a type in your schema, use the `renameType` command to clarify that it’s the same type, and not a new type plus a removed type. If you reference this type in subsequent migration actions, you must use the new name. When you rename a type, the data stored in that type doesn’t change. Clients using older schema versions will use the old name, while clients on newer versions will use the new name, but they’re all interacting with the same data.

```
// Before: itemType("OldType", {
itemType("NewType", {
  keyPath: "/t-:name",
  fields: {
    name: { type: string },
  },
});

migrate(6, "A rename type example", (m) => {
  m.renameType("OldType", "NewType");
});
```

### AddField

When you add a new field to a type, use the `addField` command within a `changeType` command to declare the new field. Clients using newer schema versions can read and write to this added field, while older versions won’t see this field at all. If a client using an older version updates an item that has added fields, those fields will be untouched. In other words, there is no way to replace the whole object—StatelyDB only updates the fields known to the caller’s schema version. If you want to completely replace an item, make sure to delete it first.
Adding [`required`](/schema/fields) fields to an existing type must be accompanied by a `readDefault` to ensure newer clients can read items created by older clients that didn’t know about this field. The provided `readDefault` will be subject to any [`valid`](/schema/fields) expression on the new field.

```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
    name: { type: string, readDefault: "John Doe" },
  },
});

migrate(7, "An add field example", (m) => {
  m.changeType("ExistingType", (i) => {
    i.addField("name");
  });
});
```

### RemoveField

When you’ve removed a field from a type, use the `removeField` command within a `changeType` command. Clients using older schema versions will still see the field, while newer clients cannot interact with the field but will not disturb any data already saved there. If you want to zero out this removed field for older clients, you will need to delete the item first. Removing [`required`](/schema/fields) fields from an existing type must be accompanied by a `readDefault` in the migration command to ensure older clients can read items created by newer clients that no longer know about this field. The provided `readDefault` will be subject to any [`valid`](/schema/fields) expression on the removed field.

```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
    name: { type: string },
  },
});

migrate(8, "A remove field example", (m) => {
  m.changeType("ExistingType", (i) => {
    i.removeField("name", "John Doe");
  });
});
```

### RenameField

If you’ve renamed a field in a type, use the `renameField` command in a `changeType` command, to distinguish a rename from an add and a remove. Renamed fields keep the same stored data and only change the name. This means that clients on different versions of the schema will read and write the same data, just with different field names.

```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
    // Before: oldName: { type: string },
    newName: { type: string },
  },
});

migrate(9, "A rename field example", (m) => {
  m.changeType("ExistingType", (i) => {
    i.renameField("oldName", "newName");
  });
});
```

### ModifyFieldReadDefault

If you’ve changed the default value of a field, by either adding, removing or modifying the [`readDefault`](/schema/fields#read-default) value, use the `modifyFieldReadDefault` command within a `changeType` command. The provided `readDefault` will be subject to any `valid` expression on the field. Note: You can never add or remove a [`required` field’s](/schema/fields) readDefault value, only modify the existing value. This is because required fields must always have a value.

```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
    // Before: name: { type: string, readDefault: "John Doe" },
    name: { type: string, readDefault: "Jane Doe" },
  },
});

migrate(10, "A modify field readDefault example", (m) => {
  m.changeType("ExistingType", (i) => {
    i.modifyFieldReadDefault("name");
  });
});
```

### MarkFieldAsRequired

If you’ve changed a field to be [`required`](/schema/fields), use the `markFieldAsRequired` command within a `changeType` command. Additionally, you must provide a [`readDefault`](/schema/fields#read-default) on the field. Newer clients will see this `readDefault` when reading items created by older clients that may have set the [zero value](/schema/data-types/#zero-values) for this field. The provided `readDefault` will be subject to any [`valid`](/schema/fields) expression on the field.
```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
    // Before: name: { type: string, required: false },
    // (required: true is implicit in schema)
    name: { type: string, readDefault: "John Doe" },
  },
});

migrate(11, "A required example", (m) => {
  m.changeType("ExistingType", (i) => {
    i.markFieldAsRequired("name");
  });
});
```

### MarkFieldAsNotRequired

If you’ve changed a field to not be [`required`](/schema/fields), use the `markFieldAsNotRequired` command with a [`readDefault`](/schema/fields#read-default) within a `changeType` command. Older clients will see the `readDefault` when reading items modified by newer clients that set the [zero value](/schema/data-types/#zero-values). This `readDefault` will be subject to any [`valid`](/schema/fields) expression on the field.

```
itemType("ExistingType", {
  keyPath: "/et-:id",
  fields: {
    id: { type: string },
    // Before: name: { type: string }, (required: true is implicit in schema)
    name: { type: string, required: false },
  },
});

migrate(12, "A not required example", (m) => {
  m.changeType("ExistingType", (i) => {
    i.markFieldAsNotRequired("name", "John Doe");
  });
});
```

## Generating Client Code (schema/generate)

Using the [CLI](/cli), `stately schema generate` will run your schema TypeScript files and then create client code in one of our supported SDK languages that contains typed objects corresponding to the types in your schema. For this to run, you need to have NodeJS installed, and have installed the dependencies for your schema package with `npm install` or another package manager.

Generating the SDK can be done in one of two modes: preview or release.

Preview mode generates the SDK based on your local schema code, which is useful for rapidly iterating on and integrating with the generated code before you’ve published a new schema version with `schema put`. The client will throw an error if you try to actually use it, since the Store doesn’t know about your new schema changes yet. To generate a preview of the SDK, you supply the `--preview` argument with the path to your schema’s index file (you can have schema in as many TypeScript files as you want as long as there’s a single file that exports everything). The positional argument is the output directory where the language-specific code will go:

```
stately schema generate \
  --language go --preview ./schema/schema.ts \
  ./pkg/schema
```

Release mode generates client code based on a published schema version, and that client can actually talk to your Store. To generate a release of the SDK, you supply the `--schema-id` and `--version` arguments, and the positional argument is the output directory where the language-specific code will go. You can omit `--version` to generate from the latest published version of your schema.

```
stately schema generate \
  --language go \
  --schema-id \
  --version \
  ./pkg/schema
```

Each language produces different code that’s tailored to the conventions and capabilities of that language. See Client SDKs in the sidebar for more info.

## AI Schema Co-Pilot (schema/ai-copilot)

While StatelyDB’s schema language is easy to use if you already know what you want, it can be helpful to have assistance when deciding how to model a problem in schema. Fortunately, agentic AI assistants like Claude Code and Cursor can help you build and iterate on schema. StatelyDB provides a couple of additional tools to make sure your AI copilot of choice can be most effective.
## StatelyDB MCP Server [`@stately-cloud/statelydb-mcp-server`](https://github.com/StatelyCloud/statelydb-mcp-server) adds tools that your AI copilot can call in order to validate and publish schema on your behalf. First, make sure you’ve already [installed the Stately CLI](/guides/getting-started/#download-the-stately-cli) and NodeJS. Then add the MCP server to your assistant’s configuration. For example, Claude stores its configs in `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS. Your tool might work slightly differently. ``` { "mcpServers": { "statelydb": { "command": "npx", "args": ["-y", "@stately-cloud/statelydb-mcp-server"] } } } ``` Now, you can tell your assistant to do things like “Design an StatelyDB elastic schema for a book store”. ## llms.txt There is also a text version of all of our documentation available at `https://docs.stately.cloud/llms-full.txt`. If you provide this as context, your assistant will do a much better job of writing StatelyDB schemas and helping you with developing on StatelyDB. ## Example: Movies Schema (schema/movies) ``` ``` This schema is used in the [API Reference](/api/put) examples - it also shows a selection of interesting schema features: ``` import { durationSeconds, int, itemType, string, timestampMilliseconds, type, uint, uuid, } from "@stately-cloud/schema"; // You can define custom types like this to make your schema more // readable as well as ensure consistent types. Here we assign a UUID // type to the MovieId type to ensure that all MovieId fields are UUIDs. /** The unique ID of a movie */ const MovieId = type("MovieId", uuid); // Here we assign a UUID type to the ActorId type to ensure that all // ActorId fields are UUIDs. /** The unique ID of an actor */ const ActorId = type("ActorId", uuid); itemType("Movie", { keyPath: [ // This is the primary key path for the Movie item type, it's // globally unique to your Stately store since :id is configured // with an initial value. "/movie-:id", // This key path allows for querying for a movie based on its title "/name-:title/movie-:id", // This key path allows you to query for a movie based on a // genre/year "/genres-:genre/years-:year/movie-:id", ], ttl: { source: "fromCreated", durationSeconds: 60, }, fields: { genre: { type: string }, year: { type: int }, title: { type: string }, id: { type: MovieId, initialValue: "uuid" }, duration: { type: durationSeconds }, rating: { type: string }, created: { type: timestampMilliseconds, fromMetadata: "createdAtTime", }, updated: { type: timestampMilliseconds, fromMetadata: "lastModifiedAtTime", }, }, }); /** * A character is a role played by an actor in a movie. In this example * we model that actors can play multiple characters in a movie. */ itemType("Character", { keyPath: [ // This key path enables queries like: "What characters did actor X // play?" or further specify "What characters did actor X play in // movie Y?" "/actor-:actorId/movie-:movieId/name-:name", // This key path enables queries like: "Who played role X in movie // Y?" Even more generally: "What characters were in movie Y?" 
"/movie-:movieId/role-:role/name-:name", ], ttl: { source: "fromCreated", durationSeconds: 60, }, fields: { actorId: { type: ActorId }, name: { type: string }, role: { type: string }, movieId: { type: MovieId }, created: { type: timestampMilliseconds, fromMetadata: "createdAtTime", }, updated: { type: timestampMilliseconds, fromMetadata: "lastModifiedAtTime", }, }, }); itemType("Actor", { keyPath: [ // This is the primary key path for the Actor item type, it's // globally unique in stately since it's using an initial value "/actor-:id", // This key path allows you to query for all actors with a given // name "/name-:name/actor-:id", ], ttl: { source: "fromCreated", durationSeconds: 60, }, fields: { name: { type: string }, id: { type: ActorId, initialValue: "uuid" }, created: { type: timestampMilliseconds, fromMetadata: "createdAtTime", }, updated: { type: timestampMilliseconds, fromMetadata: "lastModifiedAtTime", }, }, }); /** * Change is similar to an audit log, this would be used to tracks * changes to movies. */ itemType("Change", { keyPath: "/movie-:movieId/change-:id", ttl: { source: "fromCreated", durationSeconds: 60, }, fields: { id: { type: uint, initialValue: "sequence" }, description: { type: string }, field: { type: string }, movieId: { type: MovieId }, }, }); ``` # API ## Put (Create/Replace) (api/put) ``` ``` The **Put** API is how you change the data in your Store. It handles both creating new Items and updating existing Items, with the simple behavior of replacing anything at the same [key path](/concepts/keypaths). For these examples we’ll use the schema defined in [Example: Movies Schema](/schema/movies). ## Creating New Items The **Put** API adds new Items to your Store. You can either provide an Item that has all of its ID fields populated, or if you have an [`initialValue`](/schema/fields#initial-value-fields) field you can leave it unpopulated to allow StatelyDB to choose the ID. In this example, we add a new `Movie`, and we don’t populate its `id` field because that field is defined with an `initialValue` of `uuid`, so StatelyDB will assign a new UUID to it. ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func samplePut( ctx context.Context, client stately.Client, ) (uuid.UUID, error) { movie := &schema.Movie{ // Id: []byte(...) will be auto-generated by put // because of its uuid initialValue Title: "Starship Troopers 2", Genre: "Sci-Fi", Year: 1997, Duration: 2*time.Hour + 9*time.Minute, Rating: "R", } item, err := client.Put(ctx, movie) if err != nil { return uuid.Nil, err } movie = item.(*schema.Movie) return movie.Id, nil } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_put(client) movie = StatelyDB::Types::Movie.new( # id will be auto-generated by put # because of its uuid initialValue title: 'Starship Troopers 2', year: 2004, genre: 'Sci-Fi', duration: 7880, rating: 'R' ) # Add the movie item into StatelyDB. 
item = client.put(movie) item.id end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_put(client: Client) -> UUID: movie = Movie( # id will be auto-generated by put because of its uuid # initialValue title="Starship Troopers 2", year=2004, genre="Sci-Fi", duration=7880, rating="R", ) # Add the movie item into StatelyDB. item = await client.put(movie) return item.id ``` ### TypeScript ``` async function samplePut(client: DatabaseClient) { let movie = await client.put( client.create("Movie", { // id will be auto-generated by put // because of its uuid initialValue title: "Starship Troopers 2", year: 2004n, genre: "Sci-Fi", duration: 7880n, rating: "R", }), ); return movie.id; } ``` ## Replacing Items **Put** can also be used to replace existing Items. We say “replace” instead of “update” because the new Item completely replaces the old Item. To replace an existing Item, you Put an Item that has the same ID fields as an existing Item, and that will overwrite the original Item. ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleUpdate( ctx context.Context, client stately.Client, movieID uuid.UUID, ) error { // Replace the Movie at movieID with this new item. _, err := client.Put(ctx, &schema.Movie{ Id: movieID, Title: "Starship Troopers", Year: 1997, Genre: "Sci-Fi", Duration: 2*time.Hour + 9*time.Minute, Rating: "R", }) if err != nil { return err } return nil } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_update(client, movie_id) # Replace the Movie at movie_id with this new item. client.put(StatelyDB::Types::Movie.new( id: movie_id, title: 'Starship Troopers', year: 1997, genre: 'Sci-Fi', duration: 7890, rating: 'R' )) end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_update(client: Client, movie_id: UUID) -> None: # Replace the Movie at movie_id with this new item. await client.put( Movie( id=movie_id, title="Starship Troopers", year=1997, genre="Sci-Fi", duration=7890, rating="R", ) ) ``` ### TypeScript ``` async function sampleUpdate( client: DatabaseClient, movieId: Uint8Array, ) { // Replace the Movie at movieId with this new item. await client.put( client.create("Movie", { id: movieId, genre: "action", year: 1997n, title: "Starship Troopers", rating: "R", duration: 7_740n, }), ); } ``` ### CLI ``` stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "id": "2hC3sMFFSlelJlFf9hRD9g", "title": "Starship Troopers", "rated": "R", "duration_seconds": 7740, "genre": "Sci-Fi", "year": 1997 }' ``` # Batch Put If you have multiple Items, it can be more efficient to put them all at once using **PutBatch**. This allows up to 50 Items, and the puts are applied atomically, meaning either all puts will succeed, or none of them will. You can combine new Items and updates to existing Items in the same batch - Items that have an `initialValue` will always be newly created, while existing Items will be replaced. 
### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleBatchPut( ctx context.Context, client stately.Client, ) error { // Put some thriller movies _, err := client.PutBatch(ctx, &schema.Movie{ Title: "Seven", Rating: "R", Year: 1995, Genre: "Thriller", Duration: 2*time.Hour + 7*time.Minute, }, &schema.Movie{ Title: "Heat", Rating: "R", Year: 1995, Genre: "Thriller", Duration: 2*time.Hour + 50*time.Minute, }, ) return err } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def batch_put(client) result = client.put_batch( StatelyDB::Types::Movie.new( title: 'Alien', year: 1979, genre: 'Sci-Fi', duration: 7020, rating: 'R' ), StatelyDB::Types::Movie.new( title: 'Aliens', year: 1986, genre: 'Sci-Fi', duration: 9480, rating: 'R' ), ) result.each do |item| puts "Item Put: [#{item.id}] #{item.title}" end end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def batch_put(client: Client) -> None: movies = [ Movie( title="Alien", year=1979, genre="Sci-Fi", duration=7020, rating="R", ), Movie( title="Aliens", year=1986, genre="Sci-Fi", duration=9480, rating="R", ), ] results = await client.put_batch(*movies) for result in results: print(f"Item Put: [{result.id}] {result.title}") ``` ### TypeScript ``` async function sampleBatchPut(client: DatabaseClient) { interface Movie { title: string; rated: string; duration: string; } // Put two movies await client.putBatch( client.create("Movie", { title: "Alien", year: 1979n, genre: "Sci-Fi", duration: 7020n, rating: "R", }), client.create("Movie", { title: "Aliens", year: 1986n, genre: "Sci-Fi", duration: 9480n, rating: "R", }), ); } ``` ## Unique Constraints When StatelyDB chooses the ID for your `initialValue` field, the Item is guaranteed to be new - it won’t overwrite any existing Item. You can use this to your advantage to implement “unique constraints”—if a value is generated for an `initialValue` field and that field is used in one of the Item’s [key paths](/concepts/keypaths), StatelyDB also ensures that an Item doesn’t already exist under any of the Item’s other key paths. For example, imagine you have a `User` Item type that defines two key paths, `/user-:id` and `/email-:email`, and the `id` field has `initialValue: "uuid"`. When you Put a User without specifying its ID, StatelyDB will generate a new UUID for the ID, and will fail the Put if another User already exists with the same email. ## Partial and Conditional Updates StatelyDB does not currently have an API for partial updates (changing only some fields of an Item). For now, those operations are best handled using a [transaction](/api/transaction). Within a transaction, you can Get the existing state of an Item, make changes to it (or return early if your condition is not met), and then Put the Item back to the Store. The transaction guarantees that the Item didn’t change during that sequence of operations. This is very flexible since you can implement any logic you want within a transaction, but we do intend to introduce convenience APIs for partial updates and some common conditions in the future. 
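Returning to the Unique Constraints section above, here is a minimal sketch of that `User` Item Type expressed in Elastic Schema, using the same imports as the Movies schema earlier in this document. The field set is illustrative — the parts that matter for the constraint are the two key paths and the `initialValue: "uuid"` on `id`.

```
import { itemType, string, uuid } from "@stately-cloud/schema";

// A sketch of the User type from the Unique Constraints example above.
// Because `id` has an initialValue and appears in a key path, letting
// StatelyDB generate the id on Put guarantees the Item is new, and the
// Put will fail if another User already exists at /email-:email.
itemType("User", {
  keyPath: [
    // Primary key path; globally unique thanks to the uuid initialValue.
    "/user-:id",
    // Secondary key path that doubles as a uniqueness constraint on email.
    "/email-:email",
  ],
  fields: {
    id: { type: uuid, initialValue: "uuid" },
    email: { type: string },
  },
});
```

With this in place, a Put that leaves `id` unset behaves like an atomic “create this user only if the email is unused” operation.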
### Must Not Exist However, a common pattern is to create an item only if that item (or another item with the same ID) doesn’t exist. You can use the Put API with per-Item options to specify a “Must Not Exist” constraint on the Put. The Put will fail if the item already exists at one or more of its key paths. This works for both singular and batch updates, and you can mix and match items with a “Must Not Exist” constraint with ones that don’t in a single batch. ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func samplePutMustNotExist( ctx context.Context, client stately.Client, movie *schema.Movie, ) (*schema.Movie, error) { // This will fail if the movie already exists item, err := client.Put(ctx, stately.WithPutOptions{ Item: movie, MustNotExist: true, }) if err != nil { return nil, err } return item.(*schema.Movie), nil } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_put_must_not_exist(client, movie) # This will fail if the movie already exists item = client.put(movie, must_not_exist: true) # or: item = client.put_batch( # {item: movie, must_not_exist: true}, another_item) item.id end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_put_must_not_exist(client: Client, movie: Movie) -> UUID: # This will fail if the movie already exists item = await client.put(movie, must_not_exist=True) # or: item = await client.put_batch( # WithPutOptions(item=movie, must_not_exist=True), another_item) return item.id ``` ### TypeScript ``` async function samplePutMustNotExist( client: DatabaseClient, movie: Movie, ) { const newMovie = await client.put(movie, { mustNotExist: true }); // or: const newMovie = await client.putBatch( // { item: movie, mustNotExist: true }, anotherItem); return newMovie.id; } ``` ## Get (api/get) **Get** allows you to retrieve [Items](/concepts/items) by one of their full [key paths](/concepts/keypaths). If no Item exists at that key path, nothing is returned. Like Put, Get supports batch requests of up to 50 key paths at a time, returning all the items that exist. In order to Get an Item you must know one of its key paths, which were [defined](/schema/keypaths) as part of the Item Type’s schema. Each SDK contains some helpers to build key paths. Clients automatically parse the returned Items into typed objects, though depending on the language you may have different methods for checking which type an Item is. For this example we’ll use the schema defined in [Example: Movies Schema](/schema/movies), which declares `Movie` as an Item Type with the key path `/movie-:id`.
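The examples below show single-Item Gets. The batch form mentioned above isn’t otherwise illustrated on this page, so here is a minimal TypeScript sketch of it — it assumes the client exposes a `getBatch` helper that accepts multiple key paths (by analogy with `putBatch` from the Put section); check your SDK reference for the exact name and signature.

```
// A sketch of a batch Get: fetch several Movies in one request (up to
// 50 key paths). Key paths with no matching Item are simply absent
// from the results.
async function sampleGetBatch(
  client: DatabaseClient,
  movieIds: Uint8Array[],
) {
  // `getBatch` is an assumed name here, mirroring `putBatch`.
  const items = await client.getBatch(
    ...movieIds.map((id) => keyPath`/movie-${id}`),
  );
  for (const item of items) {
    if (client.isType(item, "Movie")) {
      console.log("Movie:", item.title);
    }
  }
}
```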
### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleGet( ctx context.Context, client stately.Client, movieID uuid.UUID, ) (*schema.Movie, error) { item, err := client.Get( ctx, // Construct the key path for the movie "/movie-"+stately.ToKeyID(movieID[:]), ) if err != nil { return nil, err } if item == nil { // Item not found return nil, nil } if movie, ok := item.(*schema.Movie); ok { return movie, nil } return nil, fmt.Errorf("item is not a Movie") } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_get(client, movie_id) # Construct the key path for the movie key_path = StatelyDB::KeyPath.with('movie', movie_id) item = client.get(key_path) return item end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_get(client: Client, movie_id: UUID) -> Movie | None: # Construct the key path for the movie kp = key_path("/movie-{id}", id=movie_id) # Tell get that we expect a Movie return await client.get(Movie, kp) ``` ### TypeScript ``` async function sampleGet( client: DatabaseClient, movieId: Uint8Array, ): Promise<Movie | undefined> { // Construct the key path for the movie const kp = keyPath`/movie-${movieId}`; // Tell get that we expect a Movie const item = await client.get("Movie", kp); return item; } ``` ### CLI ``` # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item get \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' ``` ## Delete (api/delete) **Delete** allows you to remove [Items](/concepts/items) by one of their [key paths](/concepts/keypaths). You can use any of the Item’s key paths and the Item will be deleted from all of them. Delete doesn’t return anything, regardless of whether an Item was actually deleted (e.g. if there wasn’t any Item at the given key path). Delete also supports batches of up to 50 key paths at a time. For this example we’ll use the schema defined in [Example: Movies Schema](/schema/movies), which defines `Movie` as an Item Type with the key path `/movie-:id`.
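Before the language examples below, a quick aside on the “any key path” behavior described above: deleting an Item via any one of its key paths removes it from all of them. This sketch deletes a `Character` through its `/movie-…/role-…/name-…` key path, which also removes it from its `/actor-…` key path; the `role` and `name` parameters are illustrative.

```
// Delete a Character via its secondary key path. Because Delete removes
// an Item from ALL of its key paths, the same Character also disappears
// from /actor-:actorId/movie-:movieId/name-:name.
async function sampleDeleteCharacter(
  client: DatabaseClient,
  movieId: Uint8Array,
  role: string,
  name: string,
) {
  await client.del(keyPath`/movie-${movieId}/role-${role}/name-${name}`);
}
```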
### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleDelete( ctx context.Context, client stately.Client, movieID uuid.UUID, ) error { return client.Delete(ctx, "/movie-"+stately.ToKeyID(movieID[:])) } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_delete(client, movie_id) key_path = StatelyDB::KeyPath.with('movie', movie_id) client.delete(key_path) end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_delete(client: Client, movie_id: UUID) -> None: kp = key_path("/movie-{id}", id=movie_id) await client.delete(kp) ``` ### TypeScript ``` async function sampleDelete(client: DatabaseClient) { // Put a movie let movie = await client.put( client.create("Movie", { genre: "action", year: 1997n, title: "Starship Troopers", rating: "R", duration: 7_740n, }), ); // Delete the movie await client.del(keyPath`/movie-${movie.id}`); } ``` ### CLI ``` # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item delete \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' ``` ## Tombstones Deleted Items leave behind a “tombstone” record which lets you know that they have been deleted. You can’t interact with the tombstone through any of our APIs, but it’s important that they exist to allow [SyncList](/api/sync/) to tell you about Items that have been deleted since your last sync. Tombstones take up very little space and are automatically cleaned up after some time (on the order of weeks). ## Listing Items (api/list) ``` ``` The **List** family of APIs are one of the most useful APIs in StatelyDB. While APIs like [Put](/api/put), [Get](/api/get), and [Delete](/api/delete/) allow you to operate on individual [Items](/concepts/items) by their full [key path](/concepts/keypaths), List lets you fetch multiple Items from the same [Group](/concepts/groups) in one go (i.e. Items that share the same [Group Key](/concepts/groups)). This can be useful for fetching collections of Items such as a customer’s order history, or a list of messages in a conversation. It’s especially useful for implementing features like infinite scrolling lists, where you want to keep fetching more Items as the user scrolls down. ## Beginning a Paginated List You start a list by calling **BeginList** with a key path prefix and an optional limit to get an initial result set. StatelyDB will return all the Items whose key paths have the given prefix. See [Constraining a List Operation](#constraining-a-list-operation) for other ways to constrain the result set of a List operation. Your initial result set might not contain all the Items under that prefix if you set a page size limit. This initial result set is your first “page” of data - imagine it as the first several screens full of emails in your email app. 
For this example we’ll use the schema defined in [Example: Movies Schema](/schema/movies), which defines these key paths (among others): | Item Type | Key Path Template | | --------- | --------------------------------------- | | Movie | `/movie-:id` | | Character | `/movie-:movieId/role-:role/name-:name` | Note how both Movie and Character share the same `/movie-:id` prefix. ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleList( ctx context.Context, client stately.Client, movieID uuid.UUID, ) (*stately.ListToken, error) { iter, err := client.BeginList( ctx, // This key path is a prefix of BOTH Movie and Character. "/movie-"+stately.ToKeyID(movieID[:]), stately.ListOptions{Limit: 10}, ) if err != nil { return nil, err } for iter.Next() { item := iter.Value() switch v := item.(type) { case *schema.Movie: fmt.Printf("Movie Title: %s\n", v.GetTitle()) case *schema.Character: fmt.Printf("Character Name: %s\n", v.GetName()) } } // When we've exhausted the iterator, we'll get a token that we // can use to fetch the next page of items. return iter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_list(client, movie_id) # This key path is a prefix of BOTH Movie and Character. key_path = StatelyDB::KeyPath.with('movie', movie_id) begin_list_result, token = client.begin_list(key_path, limit: 10) begin_list_result.each do |item| case item when StatelyDB::Types::Movie puts "[Movie] title: #{item.title}" when StatelyDB::Types::Character puts "[Character] name: #{item.name}" end end return token end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_list(client: Client, movie_id: UUID) -> ListToken: # This key path is a prefix of BOTH Movie and Character. prefix = key_path("/movie-{id}", id=movie_id) list_resp = await client.begin_list(prefix, limit=10) async for item in list_resp: if isinstance(item, Movie): print(f"[Movie] title: {item.title}") elif isinstance(item, Character): print(f"[Character] name: {item.name}") # When we've exhausted the iterator, we'll get a token that we can # use to fetch the next page of items. return list_resp.token ``` ### TypeScript ``` async function sampleList( client: DatabaseClient, movieId: Uint8Array, ): Promise { // This key path is a prefix of BOTH Movie and Character. const prefix = keyPath`/movie-${movieId}`; let iter = client.beginList(prefix, { limit: 10, }); for await (const item of iter) { if (client.isType(item, "Movie")) { console.log("Movie:", item.title); } else if (client.isType(item, "Character")) { console.log("Character:", item.name); } } return iter.token!; } ``` ### CLI ``` # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item list \ --store-id \ --item-path-prefix '/movie-2hC3sMFFSlelJlFf9hRD9g' ``` ### Constraining a List Operation While List operations can fetch items in the same group, it is also possible to optimize the result set of a List operation further to fetch only the items you need. Below are some of the ways you can constrain a List operation. 
#### Key Prefix

Key paths are more than just unique identifiers for items in a database. Their structure defines the *questions* you can efficiently answer with a List operation. The key prefix you use is a lot like choosing which question you want to ask StatelyDB. This is why it is so important to carefully design your key paths so that your most common questions can be efficiently answered. To illustrate this let’s think about a few questions we might want to ask our [Example: Movies Schema](/schema/movies) about `Movie` and `Character` item types. Then we’ll define a key path that can answer that question and a prefix to use. Note: The key paths below are not necessarily defined in the schema example we’re referencing; see parting thought 1 below the table.

| # | Question | Item Type | Key Path | Key prefix to use |
| --- | --- | --- | --- | --- |
| 1 | What were all the movies released in 2023? | Movie | `/years-:year/genres-:genre/movie-:id` | `/years-2023` |
| 2.a | What were all the Comedies released in 2023? | Movie | `/years-:year/genres-:genre/movie-:id` | `/years-2023/genres-Comedy` |
| 2.b | What were all the Comedies released in 2023? | Movie | `/genres-:genre/years-:year/movie-:id` | `/genres-Comedy/years-2023` |
| 3 | What Comedies were released over all time? | Movie | `/genres-:genre/years-:year/movie-:id` | `/genres-Comedy` |
| 4 | What were all Character roles in movie X? | Character | `/movie-:movieId/role-:role/actor-:actorId` | `/movie-:X/role` |
| 5 | Who were all the actors who played role Y? | Character | `/movie-:movieId/role-:role/actor-:actorId` | `/movie-:X/role-:Y/actor` |
| 6 | What are all the roles/characters played by actor Z? | Character | `/actor-:actorId/years-:year/movie-:movieId/role-:role` | `/actor-:Z` |
| 7 | What are all the roles/characters played by actor Z in 2024? | Character | `/actor-:actorId/years-:year/movie-:movieId/role-:role` | `/actor-:Z/years-2024` |
| 8.a | What were all the roles played by actor Z in movie X? | Character | `/actor-:actorId/years-:year/movie-:movieId/role-:role` | `/actor-:Z/years-:X.year/movie-:X` |
| 8.b | What were all the roles played by actor Z in movie X? | Character | `/actor-:actorId/movie-:movieId/role-:role` | `/actor-:Z/movie-:X` |

Some parting thoughts: 1. It is important to recognize that in any schema you may not want to define key paths to answer every question. In DynamoDB-backed stores, the cost of a write is significantly more expensive than the cost of a read, so there is a balance to strike. It might feel counterintuitive, but sometimes a less efficient list operation (e.g. one with other ItemTypes in its path) with filtering can be more cost-effective than writing an additional key path in order to create an index of only the desired items. Thus, choosing which key path(s) to write is a balance between read and write cost efficiency and your application’s access patterns. For example, in the table above, there are two key paths that can answer “What were all the Comedies released in 2023?” (2.a and 2.b). So if you only need to fetch movies by year, and never (or rarely) all years by genre, you might choose to define the key path `/years-:year/genres-:genre/movie-:id` instead of `/genres-:genre/years-:year/movie-:id`.
This is because the first key path answers question 1, “What were all the movies released in 2023?” more efficiently than the second key path, which would require fetching all genres for each movie in 2023. 2. Our questions above are a little simplistic because they focus on fetching a single item type. But in practice, you may find that you want to fetch multiple item types at once, or that you want to fetch items that are related to each other. For example, you might want to fetch all the movies in a certain genre AND for each movie fetch all the characters in that movie. A movie key path `/genres-:genre/movie-:id` and a Character key path of `/genres-:genre/movie-:id/role-:role/actor-:actorId` can be used to answer this question with the prefix `/genres-:genre`. 3. Sometimes multiple key prefixes can be used to answer the same question with the same underlying key path. Question 5 above can be answered without the trailing namespace `actor`. But that is only because there is currently no other ItemType that shares the `/movie-:id/role-:role` prefix. As a general rule, you should always use the most specific key prefix that answers your question. This will help you avoid fetching more items than you need, and will also help you avoid potential conflicts with other item types in the future. #### Key Path Range Sometimes we want to fetch a range of items that share the same key prefix, but we want to limit the results to a specific range of key paths. For example, if we want to know “What are all the Comedies released from 2020 to 2023 (inclusive)?”. This can be answered with the key prefix `/genres-Comedy/years` but the result set would include all years and need to be filtered not just 2020 to 2023. Here we can use key constraints to narrow the scope of our list operation: `GreaterThanEqualTo` `/genres-Comedy/years-2020` and `LessThanEqualTo` `/genres-Comedy/years-2023`. These constraints are applied at the database layer to give you the most optimized list experience, and so you don’t have to filter the results in your application code. Examples: ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleListWithConstraints( ctx context.Context, client stately.Client, genre string, startYear int32, endYear int32, ) (*stately.ListToken, error) { prefixKey := "/genres-" + stately.ToKeyID(genre) + "/years-" iter, err := client.BeginList( ctx, prefixKey, stately.ListOptions{}. WithKeyGreaterThanOrEqualTo(prefixKey+stately.ToKeyID(startYear)). WithKeyLessThanOrEqualTo(prefixKey+stately.ToKeyID(endYear)). WithItemTypesToInclude("Movie"), ) if err != nil { return nil, err } for iter.Next() { move := iter.Value().(*schema.Movie) // We know this is a Movie because of the item type filter! fmt.Printf("Movie Title: %s\n", move.GetTitle()) } // When we've exhausted the iterator, we'll get a token that we // can use to fetch the next page of items. 
return iter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_list_with_constraints(client, genre, start_year, end_year) # key prefix: `/genres-/years-` key_prefix = StatelyDB::KeyPath.with('genres', genre).with("years") begin_list_result, token = client.begin_list( key_path, item_types: ['Movie'], # Greater Than or Equal to `/genres-/years-` gte: StatelyDB::KeyPath.with('genres', genre).with("years", start_year), # Less Than or Equal to `/genres-/years-` lte: StatelyDB::KeyPath.with('genres', genre).with("years", end_year), ) begin_list_result.each do |item| # Note! we know that the item is a 'Movie' because we specified # item_types=['Movie'] in the begin_list call. puts "[Movie] title: #{item.title}" end return token end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_list_with_constraints( client: Client, genre: str, start_year: int, end_year: int ) -> ListToken: # This key path is a prefix of BOTH Movie and Character. prefix = key_path("/genres-{genre}/years-", genre=genre) list_resp = await client.begin_list( prefix, item_types=[Movie], gte=key_path("/genres-{genre}/years-{year}", genre=genre, year=start_year), lte=key_path("/genres-{genre}/years-{year}", genre=genre, year=end_year), ) async for item in list_resp: # Note! we know that the item is a 'Movie' because we specified # item_types=[Movie] in the begin_list call. print(f"[Movie] title: {item.title}") # When we've exhausted the iterator, we'll get a token that we can # use to fetch the next page of items. return list_resp.token ``` ### TypeScript ``` async function sampleListWithConstraints( client: DatabaseClient, genre: string, startYear: number, endYear: number, ): Promise { const prefix = keyPath`/genres-${genre}/years-`; let iter = client.beginList(prefix, { gte: keyPath`/genres-${genre}/years-${startYear}`, lte: keyPath`/genres-${genre}/years-${endYear}`, itemTypes: ["Movie"], }); for await (const item of iter) { // Note: `item` is guaranteed to be a Movie here // because of the `itemTypes: ["Movie"]` constraint. 
console.log("Movie:", (item as Movie).title); } return iter.token!; } ``` ### CLI ``` #!/usr/bin/env bash # begin-sample: update stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "id": "2hC3sMFFSlelJlFf9hRD9g", "title": "Starship Troopers", "rated": "R", "duration_seconds": 7740, "genre": "Sci-Fi", "year": 1997 }' # end-sample # begin-sample: put stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "title": "Starship Troopers 2", "year": 2004, "genre": "Sci-Fi", "duration": 7880, "rating": "R", }' # end-sample # begin-sample: get # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item get \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: delete # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item delete \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: list # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item list \ --store-id \ --item-path-prefix '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: scan stately item scan \ --store-id \ --item-types Movie,Actor # end-sample ``` #### Filters List results can also be limited by applying filters to the result set. Unlike [Key Prefix](#key-prefix) and [Key Path Range](#key-path-range) constraints which are applied at a database layer to reduce the number of items read, filters are applied after the initial result set is fetched. This means that you are still charged for reading items which are filtered out. Therefore, it is still important to use [Key Prefix](#key-prefix) and [Key Path Range](#key-path-range) constraints where possible to optimize your List operations. This doesn’t mean using a filter is bad! In fact, it can sometimes be more cost-effective to apply filters to list operations than to write items in dedicated indexes or with additional key paths just to limit the use of filters. There are two kinds of filters StatelyDB supports: ###### Item Type Filter The first is an item type filter, which allows you to specify which item types you want to include in the result set. This is useful when there are multiple item types a group which could be returned by the query but when you only care about specific item types. If this filter is not specified, all item types found via the list operation are included in the result set. For example, `Movies` and `Characters` in [Example: Movies Schema](/schema/movies), have the key paths `/movie-:id` and `/movie-:movieId/role-:role/name-:name`, respectively. So a [Key Prefix](#key-prefix) of `/movie-` will return both `Movies` and `Characters` for that movie. To ensure only `Character` item types are returned for a specific movie, you can use an item type filter with the value `Character`. All other types will be excluded or filtered out of the result set. To see language-specific examples of how to use item type filters in a List operation, see the example tables under [Key Path Range](#key-path-range) and [CEL Expression Filter](#cel-expression-filter). ###### CEL Expression Filter The second is a CEL expression filter, which allows you to specify any arbitrary conditions that an item type must satisfy to be included in the result set. 
CEL expression filters use the [CEL language](https://github.com/google/cel-spec/blob/master/doc/langdef.md) spec providing you with a powerful, flexible way to filter items based on their properties, relationships, and other criteria. For example, you could construct a filter that would include movies rated PG-13 less than 2 hours in duration, released in a year that is a multiple of 3. CEL expression filters only apply to a single item type at a time, and do not affect other item types in a result set. This means that if an item type isn’t mentioned in a CEL expression filter and there are no [item type filter](#item-type-filter) constraints, it will be included in the result set. In the context of a CEL expression, the key-word `this` refers to the item being evaluated, and property properties should be accessed by the names as they appear in schema — not necessarily as they appear in the generated code for a particular language. For example, if you have a `Movie` item type with the property `rating`, you could write a CEL expression like `this.rating == 'R'` to return only movies that are rated `R`. See the following examples for how to use filters in a List operation by language: ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleListWithFilters( ctx context.Context, client stately.Client, genre string, ) (*stately.ListToken, error) { prefixKey := "/genres-" + stately.ToKeyID(genre) + "/years-" iter, err := client.BeginList( ctx, prefixKey, stately.ListOptions{}. WithItemTypesToInclude("Movie"). // This filter will ONLY return movies that are... // 1. Rated PG-13 // 2. Have a duration of less than 2 hours // 3. Released in a year that is a multiple of 3 WithCelExpressionFilter("Movie", "this.rating == 'PG-13' && this.duration < duration('2h').getSeconds() && this.year % 3 == 0"), ) if err != nil { return nil, err } for iter.Next() { move := iter.Value().(*schema.Movie) // We know this is a Movie because of the item type filter! fmt.Printf("Movie Title: %s\n", move.GetTitle()) } // When we've exhausted the iterator, we'll get a token that we // can use to fetch the next page of items. return iter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_list_with_filters(client, genre) # key prefix: `/genres-/years-` key_prefix = StatelyDB::KeyPath.with('genres', genre).with("years") begin_list_result, token = client.begin_list( key_path, item_types: ['Movie'], cel_filters: [ ['Movie', "this.rating == 'PG-13' && this.duration < duration('2h').getSeconds() && this.year % 3 == 0"] ] ) begin_list_result.each do |item| # Note! we know that the item is a 'Movie' because we specified # item_types=['Movie'] in the begin_list call. puts "[Movie] title: #{item.title}" end return token end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_list_with_filters(client: Client, genre: str) -> ListToken: # This key path is a prefix of BOTH Movie and Character. 
prefix = key_path("/genres-{genre}/years-", genre=genre) list_resp = await client.begin_list( prefix, item_types=[Movie], cel_filter=[ [ Movie, "this.rating == 'PG-13' && this.duration < duration('2h').getSeconds() && this.year % 3 == 0", ], ], ) async for item in list_resp: # Note! we know that the item is a 'Movie' because we specified # item_types=[Movie] in the begin_list call. print(f"[Movie] title: {item.title}") # When we've exhausted the iterator, we'll get a token that we can # use to fetch the next page of items. return list_resp.token ``` ### TypeScript ``` async function sampleListWithFilters( client: DatabaseClient, genre: string, ): Promise { const prefix = keyPath`/genres-${genre}/years-`; let iter = client.beginList(prefix, { itemTypes: ["Movie"], celFilters: [ [ "Movie", "this.rating == 'PG-13' && this.duration < duration('2h').getSeconds() && this.year % 3 == 0", ], ], }); for await (const item of iter) { // Note: `item` is guaranteed to be a Movie here // because of the `itemTypes: ["Movie"]` filter above. console.log("Movie:", (item as Movie).title); } return iter.token!; } ``` ### CLI ``` #!/usr/bin/env bash # begin-sample: update stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "id": "2hC3sMFFSlelJlFf9hRD9g", "title": "Starship Troopers", "rated": "R", "duration_seconds": 7740, "genre": "Sci-Fi", "year": 1997 }' # end-sample # begin-sample: put stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "title": "Starship Troopers 2", "year": 2004, "genre": "Sci-Fi", "duration": 7880, "rating": "R", }' # end-sample # begin-sample: get # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item get \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: delete # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item delete \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: list # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item list \ --store-id \ --item-path-prefix '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: scan stately item scan \ --store-id \ --item-types Movie,Actor # end-sample ``` ## Using the List Token to Continue The result from BeginList includes a token which you can save for later, or use right away. Only the `tokenData` part of the token needs to be saved. There is also a `canContinue` property which indicates whether there are more pages still available, and `canSync` which indicates whether [SyncList](/api/sync) is supported for this list. You can pass the token to **ContinueList**, which lets you fetch more results for your result set, continuing from where you left off. For example, you might call ContinueList to get the *next* few screens of emails when the user scrolls down in their inbox. Or, you could call it in the background to eventually pull the entire result set into a local database. All you need is the token—the original arguments to BeginList are saved with it. The token keeps track of the state of the result set, and ContinueList allows you to expand the window of results that you have retrieved. Every time you call ContinueList, you’ll get a new token back, and you can use *that* token for the next ContinueList or SyncList call, and so on. 
This is also known as pagination, and it allows you to quickly show results to your users without having to get all the data at once, while still having the ability to grab the next results consistently. For many applications, you’ll only need to call BeginList once, to set up the initial token, and from then on they’ll call ContinueList (as a user scrolls through results) or SyncList (whenever the user opens or focuses the application, to check for new and updated items). In this example we’ve passed the token from the first example back into ContinueList to keep getting more Items: ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleContinueList( ctx context.Context, client stately.Client, token *stately.ListToken, ) (*stately.ListToken, error) { iter, err := client.ContinueList(ctx, token.Data) if err != nil { return nil, err } for iter.Next() { item := iter.Value() switch v := item.(type) { case *schema.Character: fmt.Printf("Character Name: %s\n", v.GetName()) case *schema.Movie: fmt.Printf("Movie Title: %s\n", v.GetTitle()) } } // You could save the token to call ContinueList later. return iter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_continue_list(client, token) # Fetch the next page of items continue_list_result, token = client.continue_list(token) continue_list_result.each do |item| case item when StatelyDB::Types::Movie puts "[Movie] title: #{item.title}" when StatelyDB::Types::Character puts "[Character] name: #{item.name}" end end # You could save the token to call ContinueList later. return token end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_continue_list(client: Client, token: str) -> ListToken: # Fetch the next page of items continue_list_result = await client.continue_list(token) # Print out the paths of the next batch of listed items async for item in continue_list_result: if isinstance(item, Movie): print(f"[Movie] title: {item.title}") elif isinstance(item, Character): print(f"[Character] name: {item.name}") # You could save the token to call ContinueList later. return continue_list_result.token ``` ### TypeScript ``` async function sampleContinueList( client: DatabaseClient, token: ListToken, ): Promise { // You can call `collect` on the iterator to pull // all the items into an Array. const { items, token: newToken } = await client .continueList(token) .collect(); for (const item of items) { if (client.isType(item, "Movie")) { console.log("Movie:", item.title); } else if (client.isType(item, "Actor")) { console.log("Actor:", item.name); } else if (client.isType(item, "Character")) { console.log("Character:", item.name); } } // You could save the token to call ContinueList later. 
return newToken; } ``` ### CLI ``` #!/usr/bin/env bash # begin-sample: update stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "id": "2hC3sMFFSlelJlFf9hRD9g", "title": "Starship Troopers", "rated": "R", "duration_seconds": 7740, "genre": "Sci-Fi", "year": 1997 }' # end-sample # begin-sample: put stately item put \ --store-id \ --item-type 'Movie' \ --item-data '{ "title": "Starship Troopers 2", "year": 2004, "genre": "Sci-Fi", "duration": 7880, "rating": "R", }' # end-sample # begin-sample: get # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item get \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: delete # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item delete \ --store-id \ --item-key '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: list # There are no key path helpers for shell, so you need to # manually base64-encode the UUID's bytes stately item list \ --store-id \ --item-path-prefix '/movie-2hC3sMFFSlelJlFf9hRD9g' # end-sample # begin-sample: scan stately item scan \ --store-id \ --item-types Movie,Actor # end-sample ``` ## Sort Direction By default, the items returned in a List are sorted by their key paths, in ascending order. Namespaces, string IDs, and byte IDs are sorted lexicographically, while number IDs are sorted numerically. For example: * `/customer-1234` * `/customer-1234/order-9` * `/customer-1234/order-10` * `/customer-1234/order-10/li-abc` * `/customer-1234/order-10/li-bcd` You can specify a `SortDirection` option to reverse this order (from the default of Ascending to Descending). ## Listing All Groups If you have root item type, this can be achieved using the [Scan](/api/scan) operation to list all Items of the group type. ## Listing Across Client Upgrades The list token you get from BeginList is specific to the schema version your client was built with. If you save that token, and then upgrade your client to another schema version, ContinueList will return a SchemaVersionMismatch error, since there’s no guarantee that the items you had cached are compatible with the new schema version. In this case you should discard the list token and start a new BeginList call. ## Syncing Lists (api/sync) ``` ``` Many applications want to cache or store a list of items from the database, and keep them up to date. However, it can be wasteful to repeatedly fetch the same list of items from scratch, over and over - especially since it’s likely only a few Items (or none!) have changed since the last poll. Instead of fetching the whole list from scratch, you can take an existing [list token](/api/list#using-the-list-token-to-continue) and call **SyncList**. SyncList tells you about any Items which have been added, changed or deleted within your result set since you got that token. Imagine you’ve already gotten the first few screens of emails in your user’s inbox. You can call SyncList with that token, and it will tell you about any new emails, *and* any emails which have changed state (e.g. being marked read). SyncList *won’t* tell you about changes to Items outside your result set window - so you wouldn’t get an update for and email from 100 screens down that changed state, unless you’d called ContinueList enough to include that email in your result set. 
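To make the cache-updating use case concrete before the full examples below, here is a minimal TypeScript sketch that applies SyncList results to an in-memory cache. It assumes the synced window contains only `Movie` Items keyed under `/movie-:id` (so a changed Item’s cache key can be rebuilt from its `id`) and that the `keyPath` helper yields a plain string; the change shapes (`reset`, `changed`, `deleted`) match the TypeScript example later in this section.

```
// A sketch of keeping a local cache in sync. The cache is keyed by key
// path, which is assumed to be a plain string here.
const cache = new Map<string, Movie>();

async function applySync(client: DatabaseClient, token: ListToken) {
  const syncResp = client.syncList(token.tokenData);
  for await (const change of syncResp) {
    switch (change.type) {
      case "reset":
        // The token was too old (or sync is otherwise impossible):
        // throw away the cache and rebuild it from the changes that follow.
        cache.clear();
        break;
      case "changed":
        // Assumption: everything in this window is a Movie, so its key
        // path can be rebuilt from its id.
        if (client.isType(change.item, "Movie")) {
          cache.set(keyPath`/movie-${change.item.id}`, change.item);
        }
        break;
      case "deleted":
        // Remove the Item at this key path from the cache.
        cache.delete(change.keyPath);
        break;
    }
  }
  // Save the new token for the next ContinueList or SyncList call.
  return syncResp.token;
}
```

A fuller handler would also cover the `updatedOutsideWindow` case described under Sync Results below — for a simple cache, evicting the key path just as you would for a delete is usually enough.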
SyncList allows for building efficient offline-capable clients, cheaply updating caches, and more that would be difficult to maintain if you had to always get the whole result set fresh every time. The same token is used for ContinueList and SyncList, so every time you call either one, you can update your saved token to use for the next time. ## Sync Results SyncList returns a stream of results that are each one of the following cases: 1. **changed** - An item that changed since the last time you saw it. The entire new item is returned with this case. 2. **deleted** - The item at this key path has been deleted since the last time you saw it. You should remove the item matching the key path from any local cache or storage. 3. **reset** - If your token is too old, or something else has changed to make Sync impossible, you’ll get a “reset” message from SyncList as the first result. This means you should throw away any cached or stored data and start over. The rest of the results from this SyncList call will be **changed** cases to help you start your result set over. 4. **updatedOutsideWindow** - This is a special case when syncing lists managed by a Group Local Index. Unlike the Key Property, index values are mutable, and therefore Stately cannot always determine if an item was previously in a result set but has moved or has never been in a given result set. All that Stately can say for sure is that an item has changed and that it is not currently in the sync window. Imagine you’re listing items by date, and you have the last 24 hours of Items, but one of the Items had its date updated to be a week in the past. That Item is no longer in your result set, so you probably don’t want to display it, but it also wasn’t deleted either. The same applies to an item two weeks in the past which was updated with more information while the date remained unchanged. The item has changed, it is not in your sync window, but it might have been at some point in time. In most situations a key returned in `updatedOutsideWindow` should be treated just like a key returned in the `deleted` field; remove this from the cache if it exists. But in some cases, such as those involving a shared cache, it is important to know the difference between a delete and an update outside this window. Here’s an example of syncing changes to the list we fetched in [Listing Items](/api/list): ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleSyncList( ctx context.Context, client stately.Client, token *stately.ListToken, ) (*stately.ListToken, error) { syncIter, err := client.SyncList(ctx, token.Data) if err != nil { return nil, err } for syncIter.Next() { switch r := syncIter.Value().(type) { case *stately.Reset: // This means our token is too old and we need to start over. 
fmt.Println("Sync operation reset") case *stately.Changed: // This item has changed since we last saw it, we should update our // local copy fmt.Printf("Item has changed: %s\n", r.Item) case *stately.Deleted: // This item has been deleted since we last saw it, we should remove // it from our local copy fmt.Printf("Item has been deleted: %s\n", r.KeyPath) } } return syncIter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_sync_list(client, token) sync_list_result = client.sync_list(token) if sync_list_result.is_reset # This means our token is too old and we need to start over. puts "Sync operation reset" end sync_list_result.changed_items.each do |item| # This item has changed since we last saw it, we should update our # local copy puts "Item has changed: #{item.id}" end sync_list_result.deleted_item_paths.each do |item| # This item has been deleted since we last saw it, we should remove # it from our local copy puts "Item has been deleted: #{item.id}" end return sync_list_result.token end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_sync_list(client: Client, token: ListToken) -> None: sync_resp = await client.sync_list(token) async for item in sync_resp: if isinstance(item, SyncReset): # This means our token is too old and we need to start over. print("Sync operation reset") elif isinstance(item, SyncChangedItem): # This item has changed since we last saw it, we should # update our local copy print(f"Item has changed {item}") elif isinstance(item, SyncDeletedItem): # This item has been deleted since we last saw it, we should # remove it from our local copy print(f"Item has been deleted {item.key_path}") token = sync_resp.token ``` ### TypeScript ``` async function sampleSync(client: DatabaseClient): Promise { const syncResp = client.syncList(token.tokenData); for await (const change of syncResp) { switch (change.type) { case "reset": // This means our token is too old and we need to start over. console.log("Sync operation reset"); break; case "changed": // This item has changed since we last saw it, we should update our // local copy console.log("Item has changed:", change.item); break; case "deleted": // This item has been deleted since we last saw it, we // should remove it from our local copy console.log(`Item has been deleted: ${change.keyPath}`); break; default: throw new Error("unexpected change type " + change.type); } } return syncResp.token!; } ``` ## Syncing Across Client Upgrades The list token you get from BeginList is specific to the schema version your client was built with. If you call SyncList with a client that has been upgraded to use a different schema version, you must ensure you properly handle the “reset” change type by clearing out any local state before consuming the updated items. This ensures that your cached result set is all based on a consistent schema version. ## Opting out of Sync While SyncList is a powerful way to reduce bandwidth, CPU, and cost, it does come with its own cost in terms of additional DynamoDB writes to maintain an index used in the sync, as well as maintaining [tombstones](/api/delete#tombstones) on deletes. If you’re sure you don’t want to use SyncList for one or more key paths, you can reduce costs by disabling Sync. 
You can disable Sync at the key path level: ``` // This item type has one syncable key path and one non-syncable key path itemType("HalfSync", { keyPath: [ // You can call SyncList using this key path "/id-:id", // But not with this key path { path: "/name-:name", syncable: false }, ], fields: { id: { type: string }, name: { type: string } } }); ``` Or you can disable syncable for an entire item type: ``` // This item type doesn't have any syncable key paths itemType("NoSync", { keyPath: [ "/id-:id", "/name-:name", ], // This applies to all key paths syncable: false, fields: { id: { type: string }, name: { type: string } } }); ``` Or you can disable syncable for the entire schema: ``` schemaDefaults({ syncable: false, }) ``` These configs are inherited, so if you set syncable in the schema defaults, you can override it again at the item type or key path level. Likewise, if you set syncable at the item type level, you can override it at the key path level. ## Scanning over a Store (api/scan) ``` ``` The **Scan** family of APIs are very similar to [List](/api/list) but they allow you to list Items across your entire Store. This can be useful scenarios such as: * Migrations and backfills that need to operate on every Item * Custom exporters to other datastores * Auditing/validation workflows * Deleting unwanted data * Building global aggregations (e.g. compute the top X blog posts by comments, or counting the number of items meeting some criteria) Be warned that these operations can be slow and expensive, especially on large Stores. You should use them sparingly and consider using [List](/api/list) instead if you can. These results of a Scan operation are not guaranteed to be in any particular order and Items with [multiple key paths](/concepts/keypaths/#multiple-key-paths) will only be returned once with their primary key path. ## Beginning a Scan Just like for a List operation, you begin by calling **BeginScan**, with your desired parameters. Then you can continue to retrieve more Items by calling **ContinueScan** with the token returned by BeginScan. For this example we’ll use the schema defined in [Example: Movies Schema](/schema/movies), which defines these key paths (among others): | Item Type | Key Path Template | | --------- | ----------------- | | Movie | `/movie-:id` | | Actor | `/actor-:id` | ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleScan( ctx context.Context, client stately.Client, ) (*stately.ListToken, error) { iter, err := client.BeginScan( ctx, stately.ScanOptions{ItemTypes: []string{"Movie", "Actor"}}, ) if err != nil { return nil, err } for iter.Next() { item := iter.Value() switch v := item.(type) { case *schema.Movie: fmt.Printf("Movie Title: %s\n", v.GetTitle()) case *schema.Actor: fmt.Printf("Actor Name: %s\n", v.GetName()) } } // When we've exhausted the iterator, we'll get a token that we // can use to fetch the next page of items. 
return iter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_scan(client) begin_scan_result, token = client.begin_scan(item_types: ['Movie', 'Actor']) begin_scan_result.each do |item| case item when StatelyDB::Types::Movie puts "[Movie] title: #{item.title}" when StatelyDB::Types::Actor puts "[Actor] name: #{item.name}" end end return token end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_scan(client: Client) -> None: scan_resp = await client.begin_scan(item_types=[Movie, Actor]) async for item in scan_resp: if isinstance(item, Movie): print(f"[Movie] title: {item.title}") elif isinstance(item, Actor): print(f"[Actor] name: {item.name}") # When we've exhausted the iterator, we'll get a token that we can # use to fetch the next page of items. return scan_resp.token ``` ### TypeScript ``` async function sampleScan(client: DatabaseClient): Promise { let iter = client.beginScan({ itemTypes: ["Movie", "Actor"], }); for await (const item of iter) { if (client.isType(item, "Movie")) { console.log("Movie:", item.title); } else if (client.isType(item, "Actor")) { console.log("Actor:", item.name); } } return iter.token!; } ``` ### CLI ``` stately item scan \ --store-id \ --item-types Movie,Actor ``` ## Using the List Token to Continue The result from BeginScan includes a list token which you can use to continue in the ContinueScan. Read more about list tokens in [Using the List Token to Continue](/api/list/#using-the-list-token-to-continue). `token.canSync` will always be set to false for Scan operations. ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleContinueScan( ctx context.Context, client stately.Client, token *stately.ListToken, ) (*stately.ListToken, error) { iter, err := client.ContinueScan(ctx, token.Data) if err != nil { return nil, err } for iter.Next() { item := iter.Value() switch v := item.(type) { case *schema.Character: fmt.Printf("Character Name: %s\n", v.GetName()) case *schema.Actor: fmt.Printf("Actor Name: %s\n", v.GetName()) } } // You could save the token to call ContinueScan later. return iter.Token() } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_continue_list(client, token) # Fetch the next page of items continue_scan_result, token = client.continue_scan(token) continue_scan_result.each do |item| case item when StatelyDB::Types::Movie puts "[Movie] title: #{item.title}" when StatelyDB::Types::Actor puts "[Actor] name: #{item.name}" end end # You could save the token to call ContinueScan later. 
  return token
end
```

### Python

```
from __future__ import annotations

from typing import TYPE_CHECKING

from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path

from .schema import Actor, Change, Character, Client, Movie


async def sample_continue_scan(client: Client, token: str) -> ListToken:
    # Fetch the next page of items
    continue_scan_result = await client.continue_scan(token)
    # Print out the paths of the next batch of listed items
    async for item in continue_scan_result:
        if isinstance(item, Movie):
            print(f"[Movie] title: {item.title}")
        elif isinstance(item, Actor):
            print(f"[Actor] name: {item.name}")
    # You could save the token to call ContinueScan later.
    return continue_scan_result.token
```

### TypeScript

```
async function sampleContinueScan(
  client: DatabaseClient,
  token: ListToken,
): Promise<ListToken> {
  // You can call `collect` on the iterator to pull
  // all the items into an Array.
  const { items, token: newToken } = await client
    .continueScan(token)
    .collect();
  for (const item of items) {
    if (client.isType(item, "Movie")) {
      console.log("Movie:", item.title);
    } else if (client.isType(item, "Actor")) {
      console.log("Actor:", item.name);
    }
  }
  // You could save the token to call ContinueScan later.
  return newToken;
}
```

## Filtering

You can pass a filter to BeginScan to only retrieve Items that match the filter. We currently support filtering by Item Type.

## Limits

Pass a limit to BeginScan to limit the max number of items to retrieve. If the limit is set to 0, then the first page of results will be returned, which may be empty because all the results were filtered out. Be sure to check `token.canContinue` to see if there are more results to fetch.

## Segmentation

Because a Scan operation can be slow and expensive, you can segment the operation into smaller chunks by passing a `totalSegments` and `segmentIndex` parameter to BeginScan. This will allow you to run multiple Scan operations in parallel, each responsible for a different segment of the Store. You can split your scan into up to 1,000,000 segments.

## Listing Across Client Upgrades

Just like for [List operations](/api/list/#listing-across-client-upgrades), you are not able to use a list token across client versions.

## Transactions (api/transaction)

**Transactions** allow you to perform a sequence of different operations together, and all changes will be applied at once, or not at all. To put it another way, transactions allow you to modify multiple Items without worrying that you’ll end up with an inconsistent state where only one of the changes was applied. Furthermore, you can do reads ([Get](/api/get), [List](/api/list)) within the transaction, and use the results of those reads to decide what to write, without worrying about whether the data you read changed between the read and the write. These are called “read-modify-write” transactions and they allow you to perform complex updates that depend on the original state of an Item or even the state of other Items.

## Using the Transaction API

The Transaction API is interactive - you pass in a function that takes a Transaction as an argument, and then within that function you can do any sequence of operations you like. If your function returns without an error, all the changes are committed together. If your function throws an error, none of the changes are committed. The result of the transaction contains information about all the Items that were created or updated within the transaction, since their metadata isn’t computed until the transaction commits.
If nothing was changed in the transaction, it is effectively a no-op. If any other concurrent operation (another transaction or other write) modified any of the Items you read in your transaction, the transaction will fail (rollback) and return a specific error. You can retry the transaction, and your function will be called again with a fresh Transaction and have a chance to repeat the sequence of reads and writes with the latest data, potentially resulting in different decisions. Please keep in mind that your transaction function needs to be idempotent if you plan to retry it - if you modify other state in memory or call non-transactional APIs within the function, you won’t get the benefits of transactional consistency. You should also try to keep your transaction functions fast, since the longer the transaction is active, the higher the chance something else will try to modify those same items. For this example we’ll use the schema defined in [Example: Movies Schema](/schema/movies), which declares a `Movie` Item type. The transaction reads a Movie, updates its rating, and saves it back, and also saves a change log entry Item: ### Go ``` package main import ( "context" "fmt" "os" "slices" "strconv" "time" "github.com/google/uuid" "github.com/StatelyCloud/go-sdk/stately" // This is the code you generated from schema "github.com/StatelyCloud/stately/go-sdk-sample/schema" ) func sampleTransaction( ctx context.Context, client stately.Client, ) error { starshipTroopers := &schema.Movie{ Title: "Starship Troopers", Rating: "G", // nope Duration: 2*time.Hour + 9*time.Minute, Genre: "Sci-Fi", Year: 1997, } item, err := client.Put(ctx, starshipTroopers) if err != nil { return err } newMovie := item.(*schema.Movie) _, err = client.NewTransaction( ctx, func(txn stately.Transaction) error { // Get the movie we just put item, err := txn.Get(newMovie.KeyPath()) if err != nil { return err } // Update the rating starshipTroopers = item.(*schema.Movie) starshipTroopers.Rating = "R" _, err = txn.Put(starshipTroopers) if err != nil { return err } // And add a change log entry as a child item - this will only exist // if the rating change also succeeds _, err = txn.Put(&schema.Change{ Field: "Rated", Description: "G -> R", MovieId: starshipTroopers.Id, }) return err }, ) return err } ``` ### Ruby ``` require 'bundler/setup' require_relative 'schema/stately' require 'byebug' def sample_transaction(client) movie = StatelyDB::Types::Movie.new( title: 'Nightmare on Elm Street', year: 1984, genre: 'Horror', duration: 6060, rating: 'G' # nope ) result = client.put(movie) result = client.transaction do |tx| # Get the movie item key_path = StatelyDB::KeyPath.with('movie', result.id) movie = tx.get(key_path) # Update the rating movie.rating = 'R' tx.put(movie) # And add a change log entry as a child item - this will only exist if # the rating change also succeeds tx.put(StatelyDB::Types::Change.new( movie_id: movie.id, field: 'rating', description: 'Updated rating from G to R' )) end # Get the Change log entry out of the transaction result. Note that we're # grabbing the second item in the result.puts portion of the response # since it is the second item we put in the transaction. 
change_log_entry = result.puts.at(1) # Display the Change log entry puts "Change Log Entry:" puts " ID: #{change_log_entry.id}" puts " Field: #{change_log_entry.field}" puts " Description: #{change_log_entry.description}" end ``` ### Python ``` from __future__ import annotations from typing import TYPE_CHECKING from statelydb import ListToken, SyncChangedItem, SyncDeletedItem, SyncReset, key_path from .schema import Actor, Change, Character, Client, Movie async def sample_transaction(client: Client) -> None: movie = Movie( title="Nightmare on Elm Street", year=1984, genre="Horror", duration=6060, rating="G", # nope ) result = await client.put(movie) txn = await client.transaction() async with txn: # Get the movie item kp = key_path("/movie-{id}", id=result.id) movie = await txn.get(Movie, kp) # Update the rating movie.rating = "R" await txn.put(movie) # And add a change log entry as a child item - this will only # exist if the rating change also succeeds await txn.put( Change( movie_id=movie.id, field="rating", description="Updated rating from G to R", ), ) # Get the Change log entry out of the transaction result. Note that # we're grabbing the second item in the result.puts portion of the # response since it is the second item we put in the transaction. change_log_entry = txn.result.puts[1] # Display the Change log entry print("Change Log Entry:") print(f" ID: {change_log_entry.id}") print(f" Field: {change_log_entry.field}") print(f" Description: {change_log_entry.description}") ``` ### TypeScript ``` async function sampleTransaction(client: DatabaseClient) { const originalMovie = await client.put( client.create("Movie", { genre: "action", year: 1997n, title: "Starship Troopers", rating: "G", // nope duration: 7_740n, }), ); const result = await client.transaction(async (txn) => { // Don't forget to await each of the operations! const movie = await txn.get( "Movie", keyPath`/movie-${originalMovie.id}`, ); // Fix the rating if (movie && movie.rating != "R") { movie.rating = "R"; await txn.put(movie); // And add a change log entry as a child item - this will only // exist if the rating change also succeeds await txn.put( client.create("Change", { movieId: movie.id, field: "Rated", description: "G -> R", }), ); } // No error means the transaction will be committed, if nothing else // changed that movie in the meantime }); // Get the Change log entry out of the transaction result. Note that // we're grabbing the second item in the result.puts portion of the // response since it is the second item we put in the transaction. const changeLogEntry = result.puts[1]; console.log(changeLogEntry); } ``` ## Puts in Transactions The **Put** API behaves a bit differently inside a transaction than it does outside. Inside a transaction, the **Put** API will only return an ID, not the whole item. If your item uses an `initialValue`, the returned ID will be the ID that StatelyDB chose for that field. Otherwise, Put won’t return anything. You can use this returned ID in subsequent Puts to build hierarchies of items that reference each other. For example you might Put a Customer (returning a new Customer ID), and then use that Customer ID to Put an Address. You also can’t Get any of the items you’ve Put, until the transaction is committed. That’s because those items haven’t actually been created yet - they all get created together when the transaction commits. ## Cross-Group Transactions A transaction can read and write Items from different [Groups](/concepts/groups). 
This is a very powerful tool but can be less efficient than isolating updates to a single group. Thus, where possible, we recommend adjusting your data model to ensure that all Items which need to be updated together are in the same Group.

## Cross-Store Transactions

One current limitation of StatelyDB, which we are working to address, is that transactions are limited to Items in a single [Store](/concepts/stores). A transaction can read and write any Items that are in the same Store, but there is currently no way to specify a separate store to also transact on.

## Transaction Isolation

Transactions in StatelyDB are strongly isolated, meaning that you can be sure that no other operations (other transactions or even individual writes) can interfere with the operations in your transaction. If you’re familiar with SQL isolation levels, this corresponds to the [`SERIALIZABLE` isolation level](https://www.postgresql.org/docs/current/sql-set-transaction.html), which is the strongest guarantee of consistency. For example, if you do a read-modify-write transaction, you can be sure that the data you read has not been modified by any other operation before the transaction completes (otherwise, you might have made decisions based on outdated data!). The tradeoff is that StatelyDB will fail your transaction if it detects that another operation has modified the data you’re reading or writing. This is a good thing, because it means you can be sure that your transactional changes are consistent, but it also means that you need to be prepared to handle transaction failures and retry the transaction if necessary.

## Errors (api/error-codes)

Each SDK has a custom error type which allows you to get additional details about problems that occur while calling APIs. It has a standard message that can be logged, but it also contains the following useful fields:

1. `Code` - A [ConnectRPC (gRPC) status code](https://connectrpc.com/docs/protocol#error-codes) that indicates the high-level category of the error.
2. `StatelyCode` - A string that describes a specific error condition. You can check these codes to handle specific error cases.

There are constants available in each SDK for the subset of StatelyCodes that we expect users to have to handle. Note that the API may return new StatelyCodes that are not yet covered by the SDK constants.

| Language   | Custom Error Type                               | StatelyCode Constants                                      |
| ---------- | ----------------------------------------------- | ---------------------------------------------------------- |
| Go         | `github.com/StatelyCloud/go-sdk/sdkerror.Error` | `github.com/StatelyCloud/go-sdk/sdkerror.StatelyErrorCode` |
| TypeScript | `StatelyError`                                  | `ErrorCode`                                                |
| Ruby       | `StatelyDB::Error`                              | `StatelyCode`                                              |
| Python     | `statelydb.StatelyError`                        | `statelydb.StatelyCode`                                    |

## StatelyCode Reference

The following is a non-exhaustive list of error codes that can be returned by the Stately SDKs, and our recommendations for handling them. If you encounter an error code that is not listed here, please [let us know](mailto:support@stately.cloud).

### BackupsUnavailable

**Code:** `Unavailable`\
BackupsUnavailable indicates that backups are not currently enabled for a Store or that the current environment does not support backups. If this is unexpected, please contact support.

* **Retryable**

### CachedSchemaTooOld

**Code:** `Internal`\
CachedSchemaTooOld indicates that schema was recently updated and internal caches have not yet caught up. If this problem persists, please contact support.
* **Retryable** *This error is immediately retryable.*

### ConcurrentModification

**Code:** `Aborted`\
ConcurrentModification indicates the current transaction was aborted because a non-serializable interaction with another transaction was detected, a stale read was detected, or because attempts to resolve an otherwise serializable interaction have exceeded the maximum number of internal resolution retries. Examples:

1. TransactionA and TransactionB are opened concurrently. TransactionA reads ItemX, puts ItemY. Before TransactionA can commit, TransactionB writes ItemX and commits. When TransactionA tries to commit, it will fail with ConcurrentModification because the read of ItemX in TransactionA is no longer valid. That is, the data in ItemX which leads to the decision to put ItemY may have changed, and thus a conflict is detected.
2. TransactionA is opened which writes ItemA with an initialValue field (a field used for ID assignment) — the generated ID is returned to the client. TransactionB also performs a write on an item which resolves to the same initialValue, and TransactionB is committed first. Since TransactionA may have acted on the generated ID (e.g. written it to a different record), it will be aborted because the ID is no longer valid for the item it was intended for.
3. A read or list operation detected that underlying data has changed since the transaction began.

* **Retryable** *This error is immediately retryable.*

### ConditionalCheckFailed

**Code:** `FailedPrecondition`\
ConditionalCheckFailed indicates that conditions provided to perform an operation were not met. For example, a condition to write an item only if it does not already exist. In the future StatelyDB may provide more information about the failed condition; if this feature is a blocker, please contact support.

* **Not Retryable** *Typically a conditional check failure is not retryable unless the conditions for the operation are changed.*

### Internal

**Code:** `Internal`\
This error indicates a bug in StatelyDB. Stately has been notified about this error and will take necessary actions to correct it. If this problem persists, please contact support.

* **Not Retryable**

### InvalidArgument

**Code:** `InvalidArgument`\
InvalidArgument indicates something in a request was incorrect or missing. The error message returned should provide more information about the specific issue. If this is not sufficient, please contact support.

* **Not Retryable**

### InvalidKeyPath

**Code:** `InvalidArgument`\
InvalidKeyPath indicates that the provided key path or key IDs are invalid. Key IDs can be strings, positive integers, UUIDs or bytes. The error message returned should provide more information about the specific issue. If this is not sufficient, please contact support.

* **Not Retryable**

### ItemReusedWithDifferentKeyPath

**Code:** `InvalidArgument`\
ItemReusedWithDifferentKeyPath occurs when a client reads an Item, then attempts to write it with a different Key Path. Since writing an Item with a different Key Path will create a new Item, StatelyDB returns this error to prevent accidental copying of Items between different Key Paths. If you intend to move your original Item to a new Key Path, you should delete the original Item and create a new instance of the Item with the new Key Path. If you intend to create a new Item with the same data, you should create a new instance of the Item rather than reusing the read result.
* **Not Retryable**

### ItemTypeNotFound

**Code:** `InvalidArgument`\
ItemTypeNotFound indicates that the ItemType supplied in a Put request was not found in the current schema or that a provided Key does not match the key of an ItemType in schema. Please ensure that the schema being used has been published via the Stately CLI (`stately schema put`), that the client is using the published version, and that any KeyPaths in use are correctly composed, then try again.

* **Not Retryable**

### MarshalFailure

**Code:** `Internal`\
MarshalFailure indicates that data transformation failed on our end.

* **Not Retryable**

### NonRecoverableTransaction

**Code:** `FailedPrecondition`\
NonRecoverableTransaction indicates that conditions required for the transaction to succeed are not possible to meet with the current state of the system. This can occur when an Item has more than one key path, and is written with a “must not exist” condition (e.g. with ID Generation on one of the keys) but another key already maps to an existing item in the store. Permitting such a write would result in conflicting state: two independent records with aliases pointing to the same item.

* **Not Retryable**

### NotImplemented

**Code:** `Unimplemented`\
This error indicates that an operation or feature is not implemented in StatelyDB. Stately has been notified about this error and will take necessary actions to correct it. If this problem persists, please contact support.

* **Not Retryable**

### OrganizationNotFound

**Code:** `NotFound`\
OrganizationNotFound indicates that an Organization was not found or access to this organization has been denied. Please ensure you have the correct organization ID and permissions. If you believe this is an error, please contact support.

* **Not Retryable** *This is not retryable until the organization is created, or access is granted.*

### PermissionDenied

**Code:** `PermissionDenied`\
PermissionDenied is a catch-all indicating that the caller cannot do the requested operation. Typically this is an access restriction on an operation or specific API that permits only specific users or roles to perform the operation. If you believe you should have access, please contact support.

* **Not Retryable**

### ProjectNotFound

**Code:** `NotFound`\
ProjectNotFound indicates that a Project was not found or access to this project has been denied. Please ensure you have the correct project ID and permissions. If you believe this is an error, please contact support.

* **Not Retryable** *This is not retryable until the project is created, or access is granted.*

### SchemaHasNoVersions

**Code:** `NotFound`\
SchemaHasNoVersions indicates that this schema has no versions yet. Please add a schema version by running `stately schema put` with the Stately CLI, then try again.

* **Not Retryable** *This is not retryable until a schema version has been published to the schema.*

### SchemaNotBoundToStore

**Code:** `NotFound`\
SchemaNotBoundToStore indicates that no schema is bound to the given store ID. Please ensure that a schema has been bound using `stately schema bind` with the Stately CLI, then try again.

* **Not Retryable** *This is not retryable until a schema has been bound to the store.*

### SchemaNotFound

**Code:** `NotFound`\
SchemaNotFound indicates that the schema with the given ID does not exist or you do not have permission to see it. Please ensure that a schema has been created and your schema ID is correct.
* **Not Retryable** *This is not retryable until a schema with that ID exists.*

### SchemaVersionMismatch

**Code:** `NotFound`\
SchemaVersionMismatch indicates the list token used on ContinueList was created with a different version of the client than the one sending it. The remedy is to issue a new BeginList request and rebuild any client-cached items.

* **Not Retryable**

### SchemaVersionNotFound

**Code:** `NotFound`\
SchemaVersionNotFound indicates that no schema was found with this ID and version. Please ensure you have specified your schema ID and version correctly.

* **Not Retryable** *This is not retryable until that specific schema version has been published to the schema.*

### SignatureInvalid

**Code:** `InvalidArgument`\
SignatureInvalid indicates the signature for a signed payload, such as a ListToken’s token\_data, did not match its key. This may be due to tampering or corruption. Please do not tamper with opaque tokens. To resolve this it may be necessary to create a new token from the original source; for example by starting a new BeginList operation.

* **Not Retryable**

### StoreInUse

**Code:** `Unavailable`\
StoreInUse indicates that the underlying Store is currently being updated and cannot be modified until the operation in progress has completed.

* **Retryable** *This can be retried with backoff.*

### StoreNotFound

**Code:** `NotFound`\
The store requested was not found or access to this store has been denied. Please ensure you have the correct store ID and permissions. If you believe this is an error, please contact support.

* **Not Retryable** *This is not retryable until the store is created, or access is granted.*

### StoreRequestLimitExceeded

**Code:** `ResourceExhausted`\
StoreRequestLimitExceeded indicates that an attempt to modify a Store has been temporarily rejected due to exceeding global modification limits. StatelyDB has been notified about this error and will take necessary actions to correct it. In the event that the issue has not been resolved, please contact support.

* **Retryable**

### StoreThroughputExceeded

**Code:** `ResourceExhausted`\
StoreThroughputExceeded indicates that the underlying Store does not have resources to complete the request. This may indicate a request rate is too high to a specific Group or that a sudden burst of traffic has exceeded a Store’s provisioned capacity.

* **Retryable** *With an exponential backoff.*

### StreamClosed

**Code:** `FailedPrecondition`\
StreamClosed indicates that the client or server tried to use a stream which has already closed. This is an unexpected error and may indicate a bug in application code using the stream.

* **Not Retryable**

### TransactionTooLarge

**Code:** `InvalidArgument`\
TransactionTooLarge indicates that the aggregate size of a transaction is too large. This error is returned when the total number of items to track, including reads, puts, deletes, and updates derived from secondary item KeyPaths or changes to secondary key paths, exceeds 100 items, or when the aggregate size of all items to write (in bytes) exceeds 4MB. The error message returned should provide more information about which case was hit. To resolve this error, please reduce the size of the transaction and try again.

* **Not Retryable**

### Unauthenticated

**Code:** `Unauthenticated`\
Unauthenticated is returned whenever an authentication token is missing or malformed. This error can be resolved by logging in and/or refreshing a token.
In most SDK clients, this error can be automatically resolved by retrying a request or manually refreshing a token. In the CLI this error can be resolved by running `stately login`.

* **Not Retryable**

### Unknown

**Code:** `Unknown`\
This error indicates that an unknown error occurred.

* **Not Retryable**

### UserAlreadyExists

**Code:** `AlreadyExists`\
UserAlreadyExists indicates that an attempt to create a user failed because a user with a given OAuth subject has already been created. If you believe this is an error, please contact support.

* **Not Retryable**

### UserNotFound

**Code:** `PermissionDenied`\
UserNotFound indicates that a caller does not exist in the system or the caller does not have access to the requested resource. If you are sure that this user exists and that you should have access, please contact support.

* **Not Retryable**

### UserNotFoundForWhoami

**Code:** `NotFound`\
UserNotFoundForWhoami is only returned from the whoami endpoint. It indicates that Stately has no record of the user. This can often be fixed by simply logging in to trigger a refresh. If that fails, please contact support.

* **Not Retryable** *Not until the user has been created.*

### WrongRegion

**Code:** `InvalidArgument`\
WrongRegion indicates that the request involved a store that lives in a different region than the API you called. Please make sure to call the API endpoint that matches the region of your store.

* **Not Retryable**

## Allow Stale Reads (api/allow-stale)

## Eventual Consistency

Each client can be put into “allow stale” mode, which enables “eventually consistent reads”. Calling a “with allow stale” method will return a new copy of the client that allows stale reads, without modifying the original client. By default, all reads in StatelyDB are strongly consistent—if you write some data and read it back afterwards (even from another client), you’ll see your update. “allow stale” says that you accept a bit of staleness in reads. Many applications are actually quite tolerant of this and don’t need to get the exact up-to-date view of data. The advantage of enabling this setting is a potential improvement in both latency and cost - it’s cheaper and easier to read without having to make sure the write has already reached consensus within a database cluster. This isn’t available within a transaction because transactions are *always* consistent—that’s the whole point of them!

For example, imagine you have stored some global settings in an Item, and your clients occasionally poll to refresh their view of those settings. Settings occasionally get updated by an administrative user. It doesn’t *really* matter that each client gets the exact up-to-date version of the setting versus one that could be a little out of date, because the change is happening infrequently, and because you’re polling the value you already accept that the client’s view of the setting could be a bit out of date. (P.S. use [SyncList](/api/sync) if you want to do this sort of polling).
### Go

```
item, err := client.WithAllowStale(true).Get(ctx, keyPath)
```

### Ruby

```
item = client.with_allow_stale(true).get(key_path)
```

### Python

```
item = await client.with_allow_stale(True).get(Movie, key_path)
```

### TypeScript

```
item = await client.withAllowStale(true).get('Movie', keyPath)
```

# Client SDKs

## JavaScript (TypeScript) (clients/javascript)

Our [API Reference](/api/put) has generic examples in every supported language, and we strive to make the experience of each SDK very similar. However, there are some things specific to the JavaScript SDK that we want to call out here.

## NodeJS Only

The JavaScript SDK is meant for NodeJS backend applications, and cannot run in a browser.

## ES Modules

The generated code uses ES Modules, with import paths that include the `.js` file extension. You may need to configure your tools to recognize this code as ESM.

## Creating Item Objects

The generated code binds a client to the types in your schema, so it can validate that you’re always passing the right types in. There are TypeScript types available for all of the item, object, and enum types you defined in your schema. However, you need to use the `create` method to correctly initialize new objects of a particular type:

```
const movie = client.create("Movie", {
  title: "Starship Troopers 2",
  year: 2004n,
  genre: "Sci-Fi",
  duration: 7880n,
  rating: "R",
});
```

This is necessary because `create` stamps some additional information in a `$typeName` property, which is then used by the client to properly serialize and deserialize items. Note that the type name argument to `create` is also based on your schema and will only allow the item type names you’ve defined.

## Checking an Item’s Type

Since JavaScript doesn’t have types at runtime, and Stately-generated objects aren’t classes, we need a special function to check the type of an item:

```
// out here, item is type AnyItem
if (client.isType(item, "Movie")) {
  // Within this branch the item is Movie
  console.log("Movie:", item.title);
} else if (client.isType(item, "Character")) {
  // Within this branch the item is Character
  console.log("Character:", item.name);
}
```

## Key Path Helper

The `keyPath` tagged template literal function can be used to generate key paths while correctly formatting IDs (especially UUIDs):

```
//...
const kp = keyPath`/movie-${movieId}/actor-${actorId}`;
```

## Protobuf

The objects generated from your schema use the [`@bufbuild/protobuf-es`](https://github.com/bufbuild/protobuf-es) library to serialize and deserialize. You’ll need to make sure your code has a dependency on `@bufbuild/protobuf-es`, and you may find the types and helpers in that library useful for working with Stately objects.

## BigInts

JavaScript has very limited support for numbers compared to other languages. The `number` type in JavaScript is always a 64-bit floating-point number, which means it can’t hold all the values of a 64-bit integer. However, 64-bit integers are used frequently in Stately schema. These get translated in generated code to [BigInt](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) which *can* represent 64-bit integers. These are unfortunately not that easy to work with - literals must be suffixed with `n` (e.g. `1234n`), and common functions like `JSON.stringify` blow up when encountering BigInts. For now, you can use our `int32` and `uint32` types in fields to force some numbers to a range that will fit in `number`, or just manually deal with the BigInts.
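For example, since `JSON.stringify` throws when it encounters a BigInt, one common workaround (plain JavaScript here, not a StatelyDB API) is to pass a replacer that converts BigInt values to strings before logging or serializing an item:

```
// A minimal sketch: `movie` is assumed to be an item created with client.create(...).
const json = JSON.stringify(
  movie,
  // Convert any BigInt values to strings so JSON.stringify doesn't throw.
  (_key, value) => (typeof value === "bigint" ? value.toString() : value),
  2,
);
console.log(json);
```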
## UUIDs

UUIDs are represented as 16-element Uint8Arrays. You can use the `uuid` [package](https://www.npmjs.com/package/uuid) to convert between these and the more common string form.

## Async Iterables

The List APIs return Async Iterables, which allow you to handle results as they stream back from the server. You can use the [`for await...of`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of) syntax to handle them:

```
for await (const item of client.beginList(prefix)) {
  // handle an item...
}
```

## Unwanted Imports

VSCode may try to import the constants you export from your schema files, instead of the generated files. Set the preference `typescript.preferences.autoImportSpecifierExcludeRegexes` to exclude your schema directory:

.vscode/settings.json

```
{
  "typescript.preferences.autoImportSpecifierExcludeRegexes": [
    "\/schema\/"
  ]
}
```

## Python (clients/python)

Our [API Reference](/api/put) has generic examples in every supported language, and we strive to make the experience of each SDK very similar. However, there are some things specific to the Python SDK that we want to call out here.

## Types

All of the generated Python code includes type hints, which should help your editor get the most out of the code.

## Async/Await

Python client SDK code uses async/await. You’ll need to use `asyncio` to run its methods.

## Key Path Helper

The `key_path` function can be used to format an ID value (especially a UUID) correctly to include in a key path:

```
from statelydb import key_path

kp = key_path("/movie-{id}/actor-{actor_id}", id=result.id, actor_id=actor_id)
```

## Checking an Item’s Type

Many client APIs return a list of items, but you want to know exactly what type each item is.

```
if isinstance(item, Movie):
    print(f"[Movie] title: {item.title}")
elif isinstance(item, Character):
    print(f"[Character] name: {item.name}")
```

## UUIDs

UUIDs are represented as the [`uuid.UUID`](https://docs.python.org/3/library/uuid.html) type.

## Go (clients/go)

[![Go Reference](https://pkg.go.dev/badge/github.com/StatelyCloud/go-sdk.svg)](https://pkg.go.dev/github.com/StatelyCloud/go-sdk)

Our [API Reference](/api/put) has generic examples in every supported language, and we strive to make the experience of each SDK very similar. However, there are some things specific to the Go SDK that we want to call out here.

## Key Path Helper

The `github.com/StatelyCloud/go-sdk/stately.ToKeyID(value)` function can be used to format an ID value (especially a UUID) correctly to include in a key path:

```
kp := "/movie-"+stately.ToKeyID(movieID)+"/actor-"+stately.ToKeyID(actorID)
```

The `ToKeyID` helper has generic support for the types `string`, `[]byte`, `[16]byte`, `uint64`, `uint32`, and `int64`. When using a typed version of these it may be necessary to convert to one of these types first.
See examples below:

A typed primitive:

```
type UserID uint64
userID := UserID(123)
kp := "/userID-"+stately.ToKeyID(uint64(userID))

type EmailAddress string
email := EmailAddress("examples@stately.cloud")
kp := "/email-"+stately.ToKeyID(string(email))
```

[`github.com/gofrs/uuid/v5`](https://pkg.go.dev/github.com/gofrs/uuid/v5) UUID:

```
movieID := uuid.Must(uuid.NewV4())
kp := "/movie-"+stately.ToKeyID(movieID[:])
kp := "/movie-"+stately.ToKeyID(movieID.Bytes())
kp := "/movie-"+stately.ToKeyID([16]byte(movieID))
```

[`github.com/google/uuid`](https://pkg.go.dev/github.com/google/UUID) UUID:

```
movieID := uuid.New()
kp := "/movie-"+stately.ToKeyID(movieID[:])
kp := "/movie-"+stately.ToKeyID([16]byte(movieID))
```

## Checking an Item’s Type

Many client APIs return a `*stately.Item`, but you want to know exactly what type it is. You can use standard Go type checks for this:

```
if movie, ok := item.(*schema.Movie); ok {
  // it's a movie
}

switch v := item.(type) {
case *schema.Movie:
  // it's a movie
case *schema.Character:
  // it's a character
}
```

## Protobuf

The Go types generated from your schema expose each of their fields as properties, but also have a method version that is nil-safe. For example, if you have a `name` field, the Go object will have a `Name` property and a `GetName()` method. The method version will return the zero value (in this case an empty string) even if called on a nil pointer.

## UUIDs

The Go code generator will automatically choose which package to use for UUID-typed fields based on what is available in the `go.mod` file in the output directory. This allows you to use your preferred UUID package with the generated code. The generator can currently detect and use [`github.com/google/uuid`](https://pkg.go.dev/github.com/google/UUID), [`github.com/gofrs/uuid`](https://pkg.go.dev/github.com/gofrs/uuid), or [`github.com/satori/go.uuid`](https://pkg.go.dev/github.com/satori/go.uuid). If you don’t already have a UUID package specified, it defaults to `github.com/google/uuid`.

## Errors

Clients can return errors, which will all be of the type `*sdkerror.Error`. These errors have an associated `StatelyCode` which can give more details about the error, plus a higher-level [gRPC/Connect error code](https://connectrpc.com/docs/protocol#error-codes) that can be used to generally group errors into different categories. There is also a helper function `sdkerror.Is` which lets you quickly check if any error matches a StatelyCode:

```
if sdkerror.Is(err, sdkerror.StoreRequestLimitExceeded) {
  // handle
}
```

The `github.com/StatelyCloud/go-sdk/sdkerror` package includes a handful of defined error codes under the `StatelyErrorCode` type that cover common error cases that you might need to handle in your own code, such as “ConcurrentModification” or “StoreThroughputExceeded”. Each of these is documented with what they mean and how to handle them.

## Ruby (clients/ruby)

Our [API Reference](/api/put) has generic examples in every supported language, and we strive to make the experience of each SDK very similar. However, there are some things specific to the Ruby SDK that we want to call out here.

## Types

We provide types for our Ruby SDK and generated Ruby code in both [RBS](https://github.com/ruby/rbs) and [Sorbet](https://sorbet.org) flavors.

## Key Path Helpers

The `StatelyDB::KeyPath` class can be used to safely construct key paths:

```
key_path = StatelyDB::KeyPath.with('movie', result.id)
```

This will produce a key path like `/movie-p05olqcFSfC6zOqEojxT9g`, correctly encoding a UUID ID.
You can append more segments onto an existing KeyPath using `with`:

```
key_path = StatelyDB::KeyPath.with('movie', result.id).with('actor', actor.id)
```

## Checking an Item’s Type

Many client APIs return a list of items, but you want to know exactly what type each item is.

```
if item.is_a? StatelyDB::Types::Movie
  # it's a movie
end

case item
when StatelyDB::Types::Movie
  # it's a movie
when StatelyDB::Types::Character
  # it's a character
end
```

## UUIDs

UUIDs use the `StatelyDB::UUID` type, which can be constructed from a standard 36-character UUID string using `StatelyDB::UUID.parse(input)`. `to_s` produces that standard string form, while `to_base64` produces the Stately base64 version.

## Enums

Enums are generated as modules under `StatelyDB::Types::Enum`. Each enum module has a constant for each value, plus a `from_int` class method that can be used to validate that a value exists in the enum.

# Deployment

## Hosted vs. BYOC (deployment)

StatelyDB has two major deployment options—serverless and BYOC (Bring Your Own Cloud). Either way, it’s using DynamoDB in AWS underneath to store your data, so the choice is mainly about where the DynamoDB table lives.

## Hosted (Serverless)

The [hosted, serverless](/deployment/serverless) option is the fastest to get started with. Stately hosts the table for you, and it’s a real DynamoDB table underneath, immediately ready for production workloads. You can create a new store from our [web console](https://console.stately.cloud) and start using it right away.

## Bring Your Own Cloud

The other option is BYOC (Bring Your Own Cloud), which may be more appealing if you want to ensure that data never leaves your own AWS account. In this model, you run the StatelyDB data plane container in your own [Kubernetes](/deployment/byoc) or [ECS](/deployment/byoc) clusters, and connect to a StatelyDB-formatted DynamoDB table in your own AWS account. The data plane connects to a control plane hosted by Stately, but that connection is only for schema metadata and task orchestration—there is no way for Stately to access your data. And because you’re using DynamoDB in your own AWS account, you can apply your AWS committed spend and discounts to that usage. We feel like this BYOC model offers the best balance of data security and cost-effectiveness for larger enterprise users.

## Hosted (Serverless) (deployment/serverless)

The serverless, hosted version of StatelyDB is completely managed by Stately. To get started you only need to click “New Store” in the [console](https://console.stately.cloud), and we’ll create a new DynamoDB table behind the scenes that’s ready for you to use. Follow the [getting started guide](/guides/connect) to connect to your new store. In this model, Stately handles the DynamoDB table for you, and while it starts out with a relatively low limit on reads and writes, you can [contact support](mailto:support@stately.cloud) to get that limit lifted. These stores are backed by real DynamoDB, and can scale to millions of requests per second!

![](/_astro/serverless-diagram.Cf_stjG2_rAKiP.svg)

## Bring Your Own Cloud (BYOC) (deployment/byoc)

In the Bring Your Own Cloud (BYOC) model, you run the Stately data plane container next to your services, and that data plane container directly talks to DynamoDB in the same account. Stately Cloud doesn’t have any access to your AWS account (even through the data plane)—your DynamoDB table is entirely under your control and data cannot leave your account without you taking action to allow it.
The data plane containers connect to the hosted StatelyDB control plane to read information about your schema versions and to coordinate data maintenance tasks.

![](/_astro/byoc-diagram.C94ZkGR1_sydXo.svg)

Unlike the [serverless, hosted option](/deployment/serverless), the BYOC deployment model does require more setup than a single click.

## Create your DynamoDB table

First, make sure you have the AWS CLI installed, and that you’ve logged in to the correct region and account. Then download [`create-table.sh`](/create-table.sh), make it executable, and run it. The table name doesn’t matter—choose whatever you want. This will use CloudFormation to create your table.

```
curl https://docs.stately.cloud/create-table.sh \
  -o ./create-table.sh && chmod a+x ./create-table.sh
./create-table.sh mycooltable
```

Alternatively, you can download the [CloudFormation template](/table-template.yaml) and use it directly.

## Set up IAM permissions

You’ll need to make sure that the data plane has the ability to talk to DynamoDB. We recommend attaching a StatelyDB-specific permissions policy to your service’s IAM Role. The minimum requirements are:

```
aws iam create-policy --policy-name StatelyDBDynamoReadWriteAccess \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "dynamodb:*Item",
        "dynamodb:Describe*",
        "dynamodb:List*",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource" : "*"
    }]}'
```

## Provision a Store in the Stately Console

Visit the [console](https://console.stately.cloud) and click “New Store”. Fill in the name and description, then select the “Bring Your Own Cloud - You’ve already created a table in your AWS account” option and enter the full ARN of the DynamoDB table you created.

## Provision an Access Key for the Data Plane

In the [console](https://console.stately.cloud), click “Access Keys” and then “New Access Key”. Choose the “Data Plane Key for BYOC” option, and then save the key somewhere. You’ll need it in the next step.

## Add the StatelyDB Data Plane to your service deployment

Now you need to run the StatelyDB data plane container next to your service. We have examples for Kubernetes (EKS) and AWS ECS:

### Kubernetes

The most straightforward way to deploy the StatelyDB data plane in Kubernetes is as a sidecar in your existing service pods. Each of your service pods will have a private StatelyDB instance they can talk to. The data plane sidecar then talks to DynamoDB on your behalf. You can see [an example deployment YAML here](/sidecar.yaml) - the important part is the “statelydb” container and its configuration:

deployment.yaml

```
containers:
  ... your other containers
  - name: statelydb
    image: public.ecr.aws/stately/dataplane:latest
    ports:
      - containerPort: 3030
        name: h2c # HTTP/2 cleartext
    env:
      # Read the data plane access key from a k8s secret.
      - name: STATELY_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: statelydb-secret
            key: STATELY_ACCESS_KEY
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
    livenessProbe:
      httpGet:
        path: /health
        port: 3030
      initialDelaySeconds: 5
      periodSeconds: 10
```

StatelyDB can work well with a relatively small amount of memory, but you’ll want to tune the memory and CPU limits for your particular workload. The most important bit is passing in the `STATELY_ACCESS_KEY` environment variable - you can use Kubernetes secrets or whatever other secrets management system you like. This should be set to the access key you generated above.
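If you manage the secret by hand rather than through a secrets operator, one way to create it is with `kubectl`; the secret name and key below are just assumptions that need to match the `secretKeyRef` in the sidecar config above:

```
kubectl create secret generic statelydb-secret \
  --from-literal=STATELY_ACCESS_KEY=<your-data-plane-access-key>
```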
You also must make sure that your pod has access to AWS credentials, and that its role has the permissions listed above. [EKS Pod Identity](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html) is a great way to do that, as it binds an IAM role directly to a pod. Keep in mind that if you use a different way of providing IAM role or user credentials to your pod, you may need to set a `REGION` environment variable on the StatelyDB container to the current AWS region so it knows where it’s running.

### AWS ECS (Elastic Container Service)

If you use ECS instead of Kubernetes, the setup is still pretty similar. You’ll deploy the StatelyDB data plane container next to your service container in the same task, so each service container gets its own private instance of the data plane. Then the data plane container talks directly to DynamoDB. You can see [an example task definition here](/task-definition.json) - the important part is the “statelydb” container and its configuration:

task-definition.json

```
{
  ...
  "containerDefinitions": [
    ... your service container ...
    {
      "name": "statelydb",
      "image": "public.ecr.aws/stately/dataplane:latest",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 3030,
          "protocol": "tcp"
        }
      ],
      "environment": [],
      "secrets": [
        {
          "name": "STATELY_ACCESS_KEY",
          "valueFrom": "arn:aws:ssm:us-west-2:509869530682:parameter/statelydb/access_key"
        }
      ],
      "memoryReservation": 256,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "mycoolservice",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "statelydb"
        }
      }
    }
  ]
}
```

You must make sure that your task role (in `taskRoleArn`) has the policy you created earlier attached to it. You also need to provide the `STATELY_ACCESS_KEY` you created earlier to the container - in this example we’ve used the [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) to manage the key.

## Configure your StatelyDB SDK

Now you configure the StatelyDB SDK in your service code. Since the StatelyDB data plane is running in a local container, you need to set the endpoint to `http://localhost:3030`. You should also turn off authentication since there’s no need for the client to authenticate to a local data plane.

### Go

```
package main

import (
	"context"

	// The StatelyDB SDK
	"github.com/StatelyCloud/go-sdk/stately"
	// This is the code you generated from schema
	"github.com/StatelyCloud/stately/go-sdk-sample/schema"
)

// Create a client for interacting with a Store.
func makeClientBYOC() stately.Client {
	// Use the generated "NewClient" func in your schema package.
	return schema.NewClient(
		context.TODO(),
		12345, // Store ID
		&stately.Options{
			Endpoint: "http://localhost:3030",
			NoAuth:   true,
		},
	)
}
```

### Ruby

```
# This is the code you generated from schema
require_relative "./schema/stately"

client = StatelyDB::Client.new(
  store_id: 12345,
  endpoint: "http://localhost:3030",
  no_auth: true,
)
```

### Python

```
# The StatelyDB SDK
# Import from the package that you generated with `stately generate`.
# We've used a relative import here for convenience but for you it might
# be different.
from .schema import Client

client = Client(
    store_id=12345,
    endpoint="http://localhost:3030",
    no_auth=True,
)
```

### TypeScript

```
// The StatelyDB SDK
// This is the code you generated from schema

// Create a Data client for interacting with a Store
const client = createClient(
  12345, // Store ID
  {
    endpoint: "http://localhost:3030",
    noAuth: true,
  },
);
```

## Observability

The StatelyDB data plane container produces structured logs per request to `stdout`. It can also produce OpenTelemetry (OTEL)-compatible metrics and traces to help with debugging issues - [let us know](mailto:support@stately.cloud) if you’re interested in ingesting these into your own metrics and tracing system. Of course, your DynamoDB table will produce metrics that you can view and alarm on using CloudWatch—there’s no difference there from running your own table.

# CLI

To install the `stately` CLI, follow [the Getting Started guide](/guides/getting-started/#download-the-stately-cli). You can upgrade the CLI to the latest version with `stately upgrade`. The CLI has built-in help (run `stately help`), but notable commands are also explained here.

## Authentication

`stately login` allows you to log in to your Stately account. Once you log in, subsequent commands will use this account until you run `stately logout`. Note that there is currently no way to run the CLI using the Client ID and Client Secret of a Service Account.

## `whoami`

`stately whoami` prints information about the currently logged-in user and all of the resources they have available to them. This is a quick way to get ahold of Store IDs or Schema IDs, which are useful in other commands such as updating schema.

## `schema`

The `stately schema` subcommands allow you to create and update schema versions, as well as generate language-specific client code.

### `bind`

Stately Support should have created a Schema with your first [Store](/concepts/stores). To find the list of schemas in your organization, you can use `stately whoami`. From there, you can call `stately schema bind` to bind the Schema to one of your Stores.

```
stately schema bind --schema-id <schema-id> --store-id <store-id>
```

### `init`

`stately schema init` sets up a NodeJS package containing your schema definition TypeScript files. The only argument is the directory path you want to create. The directory must not exist already.

```
stately schema init ./schema
```

### `generate`

`stately schema generate` will run your schema TypeScript files and then create code in one of our supported SDK languages that contains typed objects corresponding to the types in your schema. See [Generating Client Code](/schema/generate) for more info on how to use this.

### `put`

`stately schema put` publishes a new version of your schema. This will also run your schema TypeScript files. For this to run, you need to have NodeJS installed, and have installed the dependencies for your schema package with `npm install`. The first positional argument is the path to your schema’s index file (you can have schema in as many TypeScript files as you want as long as there’s a single file that exports everything).

```
stately schema put --schema-id <schema-id> --message "A schema update!" path/to/schema.ts
```

`stately schema put` will fail if you are making a backwards-incompatible change to your schema. You can add the `--allow-backwards-incompatible` argument to override this, but realize you’re taking the validity of your stored data into your own hands at that point.
Eventually we will remove the `--allow-backwards-incompatible` option when Elastic Schema can handle all types of changes—in the meantime, it is sometimes necessary to override the backwards compatibility check when you know it’s safe. ### `print` / `validate` `stately schema print` and `stately schema validate` are very similar. Both run your schema TypeScript files. For this to run, you need to have NodeJS installed, and have installed the dependencies for your schema package with `npm install`. `validate` will check to make sure your schema is valid. It will exit with a nonzero exit code if the schema is invalid. This command does not require that you specify a store and it does not check for backwards compatibility with an existing schema. `print` is exactly the same as `validate`, except it prints the resolved schema out in a sort of concise, readable format. ### `update` `stately schema update` will update your schema package’s dependency on `@stately-cloud/schema` to the latest version. We’re making changes to the schema builder library frequently, so make sure to stay up to date! ## `item` The `stately item` subcommands allow for updating or fetching items from the CLI. We strongly recommend using our SDKs instead, since they let you work with typed objects instead of JSON, but sometimes you just have to write a script. The operations mirror the StatelyDB APIs: [`get`](/api/get), [`put`](/api/put), [`delete`](/api/delete), and [`list`](/api/list). The API reference has examples of using the CLI to perform each of these.
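As a quick illustration, a hypothetical `stately item get` invocation might look like the sketch below, following the pattern of the `stately item scan` example earlier on this page. The exact flags and key path argument are assumptions here - run `stately help` or see the API reference examples for the real syntax.

```
# Hypothetical sketch only -- check `stately help` for the actual flags.
stately item get \
  --store-id <store-id> \
  "/movie-<movie-id>"
```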