SQL Server Data Compression – ROW vs PAGE vs COLUMNSTORE | DbSchema

Table of Contents
- What SQL Server data compression does
- Edition and version notes
- ROW vs PAGE vs COLUMNSTORE
- Illustrative benchmark results
- Estimate savings before changing production
- Apply compression in sqlcmd
- Use compression with DbSchema
- FAQ
- Conclusion
SQL Server data compression reduces the size of tables and indexes so SQL Server reads fewer pages from disk and memory. That can improve I/O-heavy workloads, but it also adds CPU cost during compression and decompression.
For most teams, the real question is not whether compression exists, but which option fits the workload: ROW, PAGE, or COLUMNSTORE. DbSchema helps you evaluate that decision in context because you can inspect the schema, annotate hot and cold tables, and keep the resulting changes documented.
What SQL Server data compression does
Compression is commonly used on:
- large rowstore tables
- nonclustered indexes
- historical partitions
- analytics tables using columnstore storage
In practice, compression is most valuable when storage cost and I/O pressure matter more than a small increase in CPU usage. That is why it often appears in enterprise systems that store years of orders, logs, telemetry, or financial history.
If the table is also large enough to benefit from partitioning, the two features work well together because you can compress older partitions more aggressively than current ones.
Edition and version notes
- Starting with SQL Server 2016 SP1, data compression is available in all editions.
- In older SQL Server versions, row and page compression were more limited by edition.
- Compression can be applied at the table, index, and partition level.
- Columnstore objects are already compressed by design, and older data can also use archival variants when appropriate.
That means modern SQL Server environments can usually evaluate compression without assuming an Enterprise-only feature set.
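If you are unsure what you are running, a quick check of the version and edition settles whether the 2016 SP1 threshold applies. A minimal sketch using the built-in SERVERPROPERTY function:

```sql
-- Check version and edition before planning compression work
SELECT
    SERVERPROPERTY('ProductVersion') AS product_version,
    SERVERPROPERTY('Edition')        AS edition;
```

Any ProductVersion of 13.0.4001 (2016 SP1) or later supports row and page compression in every edition.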
ROW vs PAGE vs COLUMNSTORE
| Compression type | How it works | Best fit | Watch out for |
|---|---|---|---|
| ROW | stores fixed-length values more efficiently at the row level | OLTP tables with many writes | lower savings than page compression |
| PAGE | includes row compression plus prefix and dictionary compression per page | read-heavy tables, archive tables, repetitive data | higher CPU overhead on writes |
| COLUMNSTORE | stores data by column and compresses batches very efficiently | reporting, analytics, large scans | not a direct replacement for every OLTP table |
When to prefer ROW compression
Choose ROW compression when:
- the table is write-heavy
- rows contain many fixed-length values, zeros, or null-like patterns
- you want moderate savings with lower CPU overhead
When to prefer PAGE compression
Choose PAGE compression when:
- the data is repetitive across many rows
- the workload is read-heavy or mostly historical
- the extra CPU cost is acceptable in exchange for more storage savings
When to prefer COLUMNSTORE
Choose columnstore when:
- the table behaves like a fact table or reporting dataset
- large scans, aggregations, and batch analytics dominate
- you want both compression and a different execution pattern for analytics
For broader design decisions, see SQL Server: How to Create a Table and SQL Server: How to Create an Index.
Illustrative benchmark results
Every table compresses differently, but the pattern below is typical for a pilot test on a historical order table with repetitive values.
| Test case | Size before | Size after | Space saved | Likely workload fit |
|---|---|---|---|---|
| no compression | 48 GB | 48 GB | 0% | baseline |
| ROW compression | 48 GB | 36 GB | 25% | mixed OLTP and reporting |
| PAGE compression | 48 GB | 27 GB | 44% | read-heavy archive data |
| clustered columnstore | 48 GB | 9 GB | 81% | analytics and BI scans |
These numbers are illustrative, not guaranteed. Actual savings depend on column types, repeated values, index design, and update frequency. The right workflow is to estimate first, test on a staging copy, then roll out object by object.
Estimate savings before changing production
Use sp_estimate_data_compression_savings before rebuilding a large table or index:
EXEC sp_estimate_data_compression_savings
@schema_name = 'dbo',
@object_name = 'OrderHistory',
@index_id = 1,
@partition_number = NULL,
@data_compression = 'PAGE';
For enterprise workloads, compare more than one option:
EXEC sp_estimate_data_compression_savings
@schema_name = 'dbo',
@object_name = 'OrderHistory',
@index_id = 1,
@partition_number = NULL,
@data_compression = 'ROW';
EXEC sp_estimate_data_compression_savings
@schema_name = 'dbo',
@object_name = 'OrderHistory',
@index_id = 1,
@partition_number = NULL,
@data_compression = 'PAGE';
This gives you a fact-based starting point before scheduling a maintenance window.
Apply compression in sqlcmd
Apply ROW compression to a table
ALTER TABLE dbo.OrderHistory
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = ROW);
Apply PAGE compression to an index
ALTER INDEX IX_OrderHistory_OrderDate
ON dbo.OrderHistory
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE);
Compress only older partitions
This is a common enterprise pattern: keep the newest partition lighter for writes and compress older ones more aggressively.
ALTER INDEX IX_OrderHistory_OrderDate
ON dbo.OrderHistory
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX IX_OrderHistory_OrderDate
ON dbo.OrderHistory
REBUILD PARTITION = 12 WITH (DATA_COMPRESSION = ROW);
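After rebuilding, you can confirm what each partition actually uses by querying the sys.partitions catalog view, which exposes the current compression setting per partition:

```sql
-- Verify compression state per partition of dbo.OrderHistory
SELECT
    p.index_id,
    p.partition_number,
    p.rows,
    p.data_compression_desc   -- NONE, ROW, PAGE, COLUMNSTORE, COLUMNSTORE_ARCHIVE
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.OrderHistory')
ORDER BY p.index_id, p.partition_number;
```

This is also a useful sanity check before a maintenance window, so you do not rebuild partitions that are already compressed.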
Use columnstore for analytics
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact
ON dbo.SalesFact;
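For cold columnstore data, the archival variant mentioned earlier can be applied per partition. A sketch, assuming the CCI_SalesFact index above and that partition 1 holds the oldest data:

```sql
-- Apply archival columnstore compression to the oldest partition only
ALTER INDEX CCI_SalesFact
ON dbo.SalesFact
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```

COLUMNSTORE_ARCHIVE trades extra CPU on access for smaller storage, so it fits partitions that are rarely queried.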
If your design is headed toward archive-heavy workloads or large warehouse tables, compression should be considered alongside database sharding and partition design, not as an afterthought.
Use compression with DbSchema
DbSchema helps when you want compression decisions to stay visible to the rest of the team, not buried in ad-hoc scripts.
Typical workflow:
- connect through the SQL Server JDBC driver
- inspect candidate tables and indexes in the diagram and schema tree
- run estimation or rebuild statements in the SQL editor
- document why a table uses ROW, PAGE, or columnstore in schema documentation
- deploy related schema changes through schema synchronization
DbSchema is especially helpful when compression is part of a broader release that also changes indexes, partitions, or archive tables. You can review those related objects visually instead of applying storage changes blind.
FAQ
Which is better in SQL Server: ROW or PAGE compression?
ROW is usually safer for write-heavy OLTP tables. PAGE often saves more space for read-heavy or historical tables with repeated values.
Is SQL Server data compression available outside Enterprise Edition?
Yes. In modern SQL Server versions starting with 2016 SP1, data compression is available in all editions.
Does compression always improve performance?
No. It often improves I/O-heavy reads, but can add CPU overhead, especially on frequent updates and inserts. Always test the workload, not just the storage savings.
Can I use different compression on different partitions?
Yes. That is a common pattern for hot-vs-cold data management in large SQL Server tables.
Should I use columnstore instead of page compression?
Only if the table behaves like an analytics or reporting table. Columnstore is excellent for scans and aggregations, but it is not a direct replacement for every OLTP rowstore table.
Conclusion
SQL Server data compression is one of the highest-value storage features for commercial databases because it can lower I/O, reduce storage cost, and make historical data easier to operate at scale. The best result comes from matching the compression type to the workload instead of applying one option everywhere.
DbSchema supports that workflow by helping you inspect the schema, test the SQL, and keep compression decisions documented alongside the rest of the design.