---
title: Change storage tier for seismic datasets in Azure Data Manager for Energy
description: "Learn how to change the storage tier of seismic datasets in Azure Data Manager for Energy to optimize storage costs using Hot, Cool, and Cold tiers."
author: bharathim
ms.author: bselvaraj
ms.service: azure-data-manager-energy
ms.topic: tutorial
ms.date: 03/10/2026

#Customer intent: As a data manager, I want to change the storage tier of my seismic datasets so that I can optimize storage costs based on data access frequency.

---

# Tutorial: Change the storage tier of seismic datasets

> [!IMPORTANT]
> This feature is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. To enable this feature, raise a support request. See [How do I raise a support request for Azure Data Manager for Energy?](faq-energy-data-services.yml#how-do-i-raise-a-support-request-for-azure-data-manager-for-energy).

Use the Seismic DDMS Change Tier operation in Azure Data Manager for Energy to move datasets between **Hot**, **Cool**, and **Cold** storage tiers based on access frequency. Moving rarely accessed data to cooler tiers reduces storage costs, while keeping active datasets in the Hot tier ensures optimal performance. This operation is especially valuable for seismic data management, where large volumes of historical data must remain available for future analysis or compliance but don't require frequent access.

In this tutorial, you learn how to:

> [!div class="checklist"]
>
> * Initiate a change tier operation for a dataset or path
> * Monitor the change tier operation status
> * Retrieve failure details for failed datasets

## Understand storage tiers

Seismic DDMS supports the following storage tiers, which map to the underlying cloud provider's storage classes:

| Tier | Access frequency | Access latency | Storage cost | Use case |
| --- | --- | --- | --- | --- |
| **Hot** | Frequently accessed | Milliseconds | Highest | Active projects, recent acquisitions |
| **Cool** | Infrequently accessed (30+ days) | Milliseconds | Lower | Completed projects, periodic reprocessing |
| **Cold** | Rarely accessed (90+ days) | Milliseconds to hours | Lowest | Long-term storage, regulatory compliance |

Each tier has a minimum retention period. Moving data out of a tier before the minimum retention period elapses might incur early deletion charges.
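
The access-frequency thresholds in the table can be expressed as a simple selection rule. The following sketch is illustrative only: the `suggest_tier` function and its cutoffs are an assumption based on the 30-day and 90-day figures above, not part of the Seismic DDMS API.

```python
# Illustrative tier selection from access recency, using the 30-day and
# 90-day thresholds in the table above. The function name and the idea of
# encoding the cutoffs as code are assumptions for illustration.

def suggest_tier(days_since_last_access: int) -> str:
    if days_since_last_access >= 90:
        return "Cold"   # rarely accessed: long-term storage, compliance
    if days_since_last_access >= 30:
        return "Cool"   # infrequently accessed: completed projects
    return "Hot"        # frequently accessed: active projects

print(suggest_tier(7), suggest_tier(45), suggest_tier(200))  # Hot Cool Cold
```

A rule like this can drive the audit step recommended later in this article: list datasets, compute their recency, and group them by candidate tier before submitting a bulk operation.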

## Prerequisites

Before you begin, make sure you meet the following prerequisites:

- An Azure Data Manager for Energy resource with Seismic DDMS configured.
- A registered `tenant` and `subproject` in the Seismic DDMS service.
- The `subproject.admin` role assigned to your user account.
- A bearer token for API authentication. See [How to generate auth token](how-to-generate-auth-token.md).
- At least one dataset registered in the target subproject.

## Initiate a change tier operation

Before you submit the request, pause all write and delete operations on the target path. Adding or deleting datasets while a change tier operation is in progress can lead to inconsistent results.

1. Submit a PUT request with the path and target tier. Use a trailing `/` for directory paths (for example, `sd://tenant/subproject/a/b/c/`) and no trailing slash for a single dataset (for example, `sd://tenant/subproject/a/b/c/dataset-name`):

    - All datasets in a path:

      ```http
      PUT https://<instance>.energy.azure.com/seistore-svc/api/v3/operation/change-tier?path=sd://{tenant}/{subproject}/{path}/&tier=Cool
      Authorization: Bearer {access_token}
      Content-Type: application/json
      ```

    - Single dataset:

      ```http
      PUT https://<instance>.energy.azure.com/seistore-svc/api/v3/operation/change-tier?path=sd://{tenant}/{subproject}/{path}/{dataset_name}&tier=Cool
      Authorization: Bearer {access_token}
      Content-Type: application/json
      ```

1. Save the `operation_id` from the `202 Accepted` response. You need it to monitor the operation.

    ```json
    {
      "operation_id": "c3d282e6-e7d1-40d8-8ac2-edc15b6d174c"
    }
    ```

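The request above can be scripted. The following Python sketch builds the change-tier URL and headers; the endpoint shape follows this tutorial, but the helper function, its validation rules, and the `contoso` instance name are illustrative assumptions. The actual HTTP call is shown only as a comment.

```python
# Hypothetical helper that composes the Change Tier PUT request.
# The URL shape mirrors the tutorial; the function and checks are illustrative.

VALID_TIERS = {"Hot", "Cool", "Cold"}

def change_tier_request(instance: str, sd_path: str, tier: str):
    """Return (url, headers) for the change-tier PUT call."""
    if tier not in VALID_TIERS:
        raise ValueError(f"tier must be one of {sorted(VALID_TIERS)}")
    if not sd_path.startswith("sd://"):
        raise ValueError("path must be an sd:// path, e.g. sd://tenant/subproject/a/b/c/")
    url = (
        f"https://{instance}.energy.azure.com/seistore-svc/api/v3/operation/change-tier"
        f"?path={sd_path}&tier={tier}"
    )
    headers = {
        "Authorization": "Bearer {access_token}",  # replace with a real token
        "Content-Type": "application/json",
    }
    return url, headers

# Trailing slash targets every dataset under the path, per the tutorial.
url, headers = change_tier_request("contoso", "sd://opendes/project-alpha/seismic/", "Cool")
print(url)
# To submit (requires the `requests` package and a valid token):
# resp = requests.put(url, headers=headers)
# operation_id = resp.json()["operation_id"]
```

Omitting the trailing slash and appending a dataset name instead targets a single dataset, as in the second example above.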
## Monitor the operation status

After you initiate the change tier operation, poll the status endpoint to track progress.

1. Poll the status endpoint with the `operation_id` until `status` reaches a terminal state, such as `Completed`, `CompletedWithErrors`, or `Failed`:

    ```http
    GET https://<instance>.energy.azure.com/seistore-svc/api/v3/operation/change-tier/{operation_id}
    Authorization: Bearer {access_token}
    data-partition-id: {data_partition_id}
    ```

1. Check the `status` field in the response. While the operation is running:

    ```json
    {
      "operation_id": "c3d282e6-e7d1-40d8-8ac2-edc15b6d174c",
      "created_at": "2026-03-10T06:15:00Z",
      "created_by": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
      "last_updated_at": "2026-03-10T06:17:30Z",
      "status": "Running",
      "dataset_cnt": 500,
      "completed_cnt": 342,
      "failed_cnt": 0,
      "target_tier": "Cool"
    }
    ```

    When the operation finishes, `status` changes to a terminal state such as `Completed`. Check `failed_cnt` for partial failures:

    ```json
    {
      "operation_id": "c3d282e6-e7d1-40d8-8ac2-edc15b6d174c",
      "created_at": "2026-03-10T06:15:00Z",
      "created_by": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
      "last_updated_at": "2026-03-10T06:25:00Z",
      "status": "Completed",
      "dataset_cnt": 500,
      "completed_cnt": 497,
      "failed_cnt": 3,
      "target_tier": "Cool"
    }
    ```

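The polling loop can be sketched as follows. The terminal states mirror the status values shown in this article; `fetch_status` stands in for the GET call above and is stubbed here with simulated responses, so the function and its parameters are illustrative assumptions.

```python
# Illustrative polling loop for the change tier operation status.
# `fetch_status` stands in for the GET /operation/change-tier/{operation_id} call.
import time

TERMINAL_STATES = {"Completed", "CompletedWithErrors", "Failed"}

def poll_until_terminal(fetch_status, interval_s: float = 0.0, max_polls: int = 100) -> dict:
    """Call fetch_status() until the operation reaches a terminal state."""
    for _ in range(max_polls):
        status = fetch_status()
        if status["status"] in TERMINAL_STATES:
            return status
        time.sleep(interval_s)  # in real use, back off between polls
    raise TimeoutError("operation did not reach a terminal state")

# Simulated responses standing in for real GET calls:
responses = iter([
    {"status": "Running", "completed_cnt": 342, "failed_cnt": 0},
    {"status": "Completed", "completed_cnt": 500, "failed_cnt": 0},
])
final = poll_until_terminal(lambda: next(responses))
print(final["status"])  # Completed
```

In production, replace the stub with an authenticated GET request and use a polling interval appropriate for the size of the operation.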
## Retrieve failure details

Use the `show_details=true` parameter to get per-dataset error information for any datasets that fail during the tier change.

1. Add `show_details=true` to the status request:

    ```http
    GET https://<instance>.energy.azure.com/seistore-svc/api/v3/operation/change-tier/{operation_id}?show_details=true&limit=100
    Authorization: Bearer {access_token}
    data-partition-id: {data_partition_id}
    ```

    The following query parameters control the response:

    | Parameter | Required | Type | Description |
    | --------- | -------- | ---- | ----------- |
    | `show_details` | No | boolean | Set to `true` to include the `failed_datasets` array in the response. Default: `false`. |
    | `limit` | No | integer (1–1000) | Maximum number of failed datasets to return per page. Default: `100`. Only applicable when `show_details=true`. |
    | `cursor` | No | string | Base64 URL-safe-encoded cursor from a previous response to retrieve the next page of failures. |

1. Review the `failed_datasets` array in the response:

    ```json
    {
      "operation_id": "c3d282e6-e7d1-40d8-8ac2-edc15b6d174c",
      "created_at": "2026-03-10T08:00:00Z",
      "created_by": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
      "last_updated_at": "2026-03-10T08:45:00Z",
      "status": "CompletedWithErrors",
      "dataset_cnt": 2000,
      "completed_cnt": 1994,
      "failed_cnt": 6,
      "target_tier": "Cold",
      "failed_datasets": [
        {
          "sdpath": "sd://opendes/project-alpha/seismic/survey-2024-001",
          "error": "Failed to change tier for 12 blob(s)"
        },
        {
          "sdpath": "sd://opendes/project-alpha/seismic/survey-2024-002",
          "error": "Access denied: user is not authorized to modify this dataset (ACL validation failed)"
        },
        {
          "sdpath": "sd://opendes/project-alpha/seismic/survey-2024-003",
          "error": "Failed to acquire lock"
        },
        {
          "sdpath": "sd://opendes/project-alpha/seismic/survey-2024-004",
          "error": "Dataset has no associated storage location"
        },
        {
          "sdpath": "sd://opendes/project-alpha/seismic/survey-2024-005",
          "error": "Dataset storage location has invalid format"
        },
        {
          "sdpath": "sd://opendes/project-alpha/seismic/survey-2024-006",
          "error": "Tier changed but metadata update failed after retries"
        }
      ],
      "cursor": "ZXlKamIyNTBhVzUxWVhScGIyNVViMnRsYmlJNkltVjRZVzF3YkdVaWZRPT0"
    }
    ```

    If the response includes a `cursor` value, pass it in the next request to retrieve the next page of failures.

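Cursor-based pagination over the failure details can be sketched as a small loop. Here `fetch_page` stands in for the `GET ...?show_details=true` call and is stubbed with two simulated pages; the page shape follows the response example above, while the function names are illustrative assumptions.

```python
# Illustrative cursor pagination over failed datasets.
# `fetch_page(cursor)` stands in for GET ...?show_details=true&cursor={cursor}.

def collect_failed_datasets(fetch_page) -> list:
    """Follow the cursor until every failed dataset has been collected."""
    failures, cursor = [], None
    while True:
        page = fetch_page(cursor)
        failures.extend(page.get("failed_datasets", []))
        cursor = page.get("cursor")
        if not cursor:  # no cursor means this was the last page
            return failures

# Two simulated pages standing in for real responses:
pages = {
    None: {
        "failed_datasets": [{"sdpath": "sd://t/s/a", "error": "Failed to acquire lock"}],
        "cursor": "next-page",
    },
    "next-page": {
        "failed_datasets": [{"sdpath": "sd://t/s/b", "error": "ACL validation failed"}],
    },
}
all_failures = collect_failed_datasets(lambda c: pages[c])
print(len(all_failures))  # 2
```

Aggregating every page before triaging lets you group failures by error message and fix each root cause (permissions, locks, storage metadata) in a single pass.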
## Storage tier retention policies

Each storage tier enforces a minimum retention period. If you move data from a cooler tier to a warmer tier before the retention period expires, early deletion fees might apply.

| Tier | Minimum retention period |
| ---- | ------------------------ |
| **Hot** | None |
| **Cool** | 30 days |
| **Cold** | 90 days |

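The retention rule in the table can be checked before you initiate a move. This sketch encodes the 30-day and 90-day minimums above; the function name and the idea of tracking days-in-tier yourself are assumptions for illustration, not part of the service API.

```python
# Hedged sketch of the minimum retention rule from the table above:
# days a dataset must remain in its tier before it can move without
# early deletion charges.

MIN_RETENTION_DAYS = {"Hot": 0, "Cool": 30, "Cold": 90}

def early_deletion_days_remaining(tier: str, days_in_tier: int) -> int:
    """Days left before data can leave `tier` without early deletion fees."""
    return max(0, MIN_RETENTION_DAYS[tier] - days_in_tier)

print(early_deletion_days_remaining("Cool", 12))   # 18: fees would apply now
print(early_deletion_days_remaining("Cold", 120))  # 0: retention satisfied
```

A result of `0` means the minimum retention period has elapsed for that dataset; a positive result is the number of days to wait before a fee-free move.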
Follow these practices when you manage storage tier changes:

- **Audit before changing tiers**: Use the dataset list API to identify which datasets are candidates for tier changes before initiating a bulk operation.
- **Respect retention periods**: Moving data out of Cool or Cold tiers before the minimum retention period might incur early deletion charges.
- **Monitor operations to completion**: Always poll the operation status until `status` reaches a terminal state. Don't assume success after the `202 Accepted` response.
- **Handle failures gracefully**: Use `show_details=true` to retrieve per-dataset error information and address root causes (permissions, missing blobs, retention violations) before retrying.
- **Plan for access latency changes**: Datasets in Cool and Cold tiers might have higher first-byte latency. Ensure downstream consumers are aware of potential latency increases.

## Clean up resources

This tutorial doesn't create any billable Azure resources. If you changed storage tiers for testing purposes, restore the original tiers by running another change tier operation.

## Related content

- [Tutorial: Work with seismic data by using Seismic DDMS APIs](tutorial-seismic-ddms.md)
- [Azure Blob Storage access tiers overview](../storage/blobs/access-tiers-overview.md)
- [Azure Blob Storage lifecycle management policies](../storage/blobs/lifecycle-management-overview.md)
- [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)
- [Seismic DDMS API reference](https://microsoft.github.io/adme-samples/)