Zoho Catalyst’s Data Store and ZCQL are already powerful for building serverless applications on top of a fully managed relational database.
However, as applications grow in complexity and move through multiple environments (development, staging, production, tenant-specific workspaces, etc.), a console-only approach to schema management becomes a bottleneck.
According to the current documentation and knowledge base:
ZCQL supports data manipulation operations only (DML). There is no support for CREATE TABLE, ALTER TABLE, DROP TABLE, or other Data Definition Language (DDL) statements.
Catalyst support has confirmed that tables cannot be created via code; tables and their schemas must be created in the Catalyst Console, while data can then be inserted/queried via code.
Official tutorials consistently instruct developers to manually create tables and columns in the Data Store UI before wiring up SDKs and ZCQL in their application code.
This means Catalyst currently does not support a “schema as code” or “migration” approach that many modern backend/serverless platforms provide.
For simple demos, manual table creation in the console is acceptable. But in real projects, this introduces major pain points:
Multi-environment setups (dev / staging / production)
Every environment must have the same schema.
Right now, this means someone has to recreate tables and columns by hand, or rely on fragile manual documentation.
Any typo or mismatch can cause runtime failures that are hard to detect early.
Team collaboration and onboarding
New developers or partners must be told: “Log in to Catalyst, click here, create this table, add these columns…”
This is error-prone, not repeatable, and not easily reviewed (no Git history or code review around schema changes).
Continuous Delivery and Automated Deployments
CI/CD pipelines can deploy code and functions, but cannot reliably apply schema changes, because there is no first-class, scriptable mechanism to create or modify tables.
This blocks stronger DevOps practices such as “one-click environment setup” or automated test environment spin-up.
Tenant-specific or dynamic schemas
Some applications need to provision tables at runtime (for example, per tenant, per customer space, or per module).
Today this is impossible without manual intervention, which breaks the idea of fully automated onboarding or self-service provisioning.
Auditability and reproducibility
Schema changes done via UI are not easy to track or roll back.
There is no built-in way to version schema changes, apply them forward, or revert them, similar to how code migrations work in ORM frameworks.
In short: the lack of scriptable schema management is now one of the main constraints preventing larger, more complex systems from fully standardizing on Zoho Catalyst.
I’d like to propose a set of enhancements that would make Catalyst’s Data Store much more powerful for serious application development:
Extend ZCQL to support a safe subset of DDL for Data Store tables, for example:
CREATE TABLE
Define table name
Define columns, data types, constraints (NOT NULL, UNIQUE, default values, etc.)
ALTER TABLE
Add, rename, or drop columns
Modify column data types where feasible
DROP TABLE
Optionally with safety flags or soft-deletion
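To make the proposal concrete, the DDL subset might look like the following. This is hypothetical syntax modeled on standard SQL; none of these statements are accepted by ZCQL today:

```sql
-- Hypothetical ZCQL DDL (not supported today); syntax is illustrative only.
CREATE TABLE Users (
    Name      VARCHAR(100) NOT NULL,
    Email     VARCHAR(255) UNIQUE,
    Status    VARCHAR(20)  DEFAULT 'active'
);

ALTER TABLE Users ADD COLUMN LastLogin DATETIME;

DROP TABLE Users;  -- ideally gated behind a safety flag in production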
These DDL commands could be:
Executed from:
Functions (Node.js, Java, Python, etc.) via the SDK's executeZCQLQuery() method
The ZCQL Console (for quick testing)
Controlled by permissions/scopes:
Only allowed for project owners or for functions with specific roles/scopes.
Perhaps disabled by default in production unless explicitly enabled.
This would immediately allow developers to codify their schemas and apply them through deployment scripts.
In addition to, or as an alternative to, DDL in ZCQL, Catalyst could offer dedicated schema management APIs, for example:
REST endpoints such as:
POST /datastore/tables – create a table with a JSON schema definition
PATCH /datastore/tables/{table_name} – alter a table
DELETE /datastore/tables/{table_name} – drop a table (with safeguards)
SDK wrappers:
catalyst.dataStore.createTable(schemaDefinition)
catalyst.dataStore.alterTable(tableName, changes)
catalyst.dataStore.dropTable(tableName, options)
The schema definition could be a JSON/YAML structure describing:
Table name
Columns (name, data type, length, flags like mandatory/unique/encrypted)
Relationships (if/when Data Store supports foreign key-like references)
Indexes (now or in future)
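As an illustration, such a schema definition could look like this (the field names and structure are assumptions, not an existing Catalyst format):

```json
{
  "table_name": "Orders",
  "columns": [
    { "name": "CustomerId", "type": "bigint",  "mandatory": true },
    { "name": "Status",     "type": "varchar", "length": 50, "default": "new" },
    { "name": "CardNumber", "type": "varchar", "length": 32, "encrypted": true }
  ]
}
```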
This would unlock:
Schema as code: store schema files in Git, review via pull requests.
Automated environment setup: a single script/command can spin up a new environment (dev, staging, test) with identical schema.
Programmatic multi-tenant provisioning: create per-tenant tables automatically on customer signup.
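To make the multi-tenant case concrete, here is a minimal sketch of what self-service provisioning could look like. Since no programmatic createTable exists in the SDK today, the call is injected as a parameter (createTable), which also keeps the flow testable:

```javascript
// Sketch of per-tenant provisioning. `createTable` is hypothetical: it stands
// in for the proposed catalyst.dataStore.createTable API and is injected here.
async function provisionTenant(tenantId, createTable) {
  const baseTables = ['Orders', 'Invoices'];
  const created = [];
  for (const base of baseTables) {
    const tableName = `${tenantId}_${base}`; // e.g. "acme_Orders"
    await createTable({
      table_name: tableName,
      columns: [{ name: 'CreatedAt', type: 'datetime', mandatory: true }]
    });
    created.push(tableName);
  }
  return created; // table names are now ready for ZCQL inserts/queries
}
```

On customer signup, calling provisionTenant('acme', realCreateTable) would set up the tenant's tables with no console interaction.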
On top of DDL or APIs, a simple migration mechanism would be extremely valuable:
A migration file format like:
2025_01_01_001_create_users_table.zcql
2025_01_10_002_add_status_to_orders.json
A small migration runner that:
Tracks which migrations have been applied.
Applies new migrations in order.
Rolls back where possible (or at least fails early with clear logs).
This doesn’t need to be as complex as a full ORM migration framework; even a minimal built-in migration runner, or an officially recommended pattern, would considerably improve the developer experience.
Introducing schema changes via code naturally raises concerns about safety. Here are some ideas to mitigate risk:
Role-based access control
Only project owners / admins can run schema-changing code in production.
Separate scopes for “DML only” vs “DDL allowed”.
Environment protections
Ability to disable DDL in production by default and enable it only when required.
Optional approval step in the console for dangerous operations (like dropping tables).
Audit logs
Log every schema change (who, when, what DDL/API) with before/after snapshots in the Catalyst Logs section.
Dry-run mode
API / SDK support for “dry run” which shows what will change but doesn’t apply it.
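The dry-run idea could work along these lines. `applyChange` is a stand-in for a future schema-altering API; with dryRun enabled, the computed plan is returned but nothing is executed:

```javascript
// Sketch of the proposed dry-run mode (hypothetical API; `applyChange` is
// injected in place of a real schema-altering call).
async function alterTableSafely(tableName, changes, applyChange, { dryRun = false } = {}) {
  const plan = changes.map(c => `${c.op} column ${c.column} on ${tableName}`);
  if (dryRun) {
    return { applied: false, plan };   // show what would change, touch nothing
  }
  for (const step of plan) {
    await applyChange(step);           // real run: apply each step in order
  }
  return { applied: true, plan };
}
```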
These measures preserve the stability of production while still allowing scalable, automated schema management.
Adding programmatic schema management will:
Reduce onboarding time for new projects and new developers.
Improve reliability of deployments across environments (less human error).
Enable more sophisticated architectures (multi-tenant apps, dynamically generated modules, rapid PoC → production flows).
Make Catalyst more competitive against other serverless / BaaS platforms that already support migrations and schema-as-code patterns.
Encourage partners and agencies to standardize on Catalyst for more of their backend workloads, since infrastructure can be fully automated.
In short, this feature would dramatically improve the developer experience, scalability, and maintainability of Catalyst-based solutions.
Right now, Catalyst’s Data Store and ZCQL are excellent for data operations, but the inability to create and evolve schemas via script is a key missing piece.
I hope you will consider:
Adding DDL support in ZCQL and/or
Providing schema management APIs & SDK methods, and eventually
Offering a lightweight migrations framework.
This would align Catalyst with modern “infrastructure as code” and “schema as code” practices and unlock a lot of advanced use cases for teams building serious production systems on Zoho Catalyst.