How to interact with your application’s cloud media storage#

Your cloud application’s media file storage is provided as a service. The available storage depends on the Divio region your application uses. Most applications use Amazon Web Services’s S3 service, or another S3 provider. Others use Microsoft Azure blob storage.

Locally, your applications store their media in the /data/media directory, which you can interact with directly. You can use the Divio CLI to push and pull media to the cloud if required.

You can also interact directly with the cloud storage service using a suitable client if required, though this is rarely necessary.

Warning

Cloud file operations do not necessarily have the same behaviours you may be used to from other models. It’s important that you know what you are doing and understand the consequences of any actions or commands.

Direct access to cloud storage#

In each case, you need to obtain your storage credentials for an environment from its storage DSN variable, and use those with a suitable client.

Obtaining your cloud storage access credentials#

Use the Divio CLI to obtain the environment’s storage DSN from the DEFAULT_STORAGE_DSN environment variable. To obtain this value, you must first identify the UUID of the most recent successful deployment.

divio app deployments get-var <deployment_uuid> DEFAULT_STORAGE_DSN

This value contains all the necessary information for accessing the storage bucket through a file transfer client.

Alternatively, you can ssh into the cloud container of the respective environment and retrieve the DEFAULT_STORAGE_DSN environment variable there.

For more information, see How to retrieve built-in environment variables.

Install the client#

The AWS CLI is Amazon’s official S3 client.

There are other clients suitable for connecting to S3 storage, including:

  • S3cmd, an alternative command-line utility

  • Transmit, a storage client for macOS

  • Cyberduck, a storage client for macOS and Windows

It’s beyond the scope of this documentation to discuss their usage. A brief example using the official AWS client is given here.

This section makes use of the Microsoft Azure CLI, which you will need to have installed.

Other clients suitable for connecting to Azure blob storage are also available, but it’s beyond the scope of this documentation to discuss their usage.

Parse the storage DSN#

The two examples below show which sections of the DSN correspond to different parameters, for an Amazon S3 endpoint and an Exoscale (exo.io) endpoint:

s3://AKAIIE7LUT6ODIJA:TZJYGCfUZheXG%2BwabbotgBs6d2lxZW06OIbD@example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io.s3-eu-central-1.amazonaws.com/?domain=example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io

  • key: AKAIIE7LUT6ODIJA

  • secret: TZJYGCfUZheXG%2BwabbotgBs6d2lxZW06OIbD

  • bucket name: example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io

  • endpoint: s3-eu-central-1.amazonaws.com (containing the region, eu-central-1)

s3://EXO52e55b187195d:iITF12F1tim9zBxITexrvL_bAghgK_z4w1hEuu@example-test-765482644ac540dbb23367cf3837580b-f0596a8.sos-ch-dk-2.exo.io/?auth=s3

  • key: EXO52e55b187195d

  • secret: iITF12F1tim9zBxITexrvL_bAghgK_z4w1hEuu

  • bucket name: example-test-765482644ac540dbb23367cf3837580b-f0596a8

  • endpoint: sos-ch-dk-2.exo.io (containing the region, ch-dk-2)

The secret may contain some symbols that are percent-encoded as hexadecimal values, and you will need to convert them back before using them:

  • %2B must be changed to +

  • %2F must be changed to /

For any other values beginning with %, consult a URL-encoding conversion table.
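As a sketch, the percent-encoded secret from the first example above can be decoded with Python’s standard library (any URL-decoding tool will do the same job):

```shell
# URL-decode the secret taken from the example DSN above
python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))' \
    'TZJYGCfUZheXG%2BwabbotgBs6d2lxZW06OIbD'
# prints: TZJYGCfUZheXG+wabbotgBs6d2lxZW06OIbD
```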

The bucket name identifies the resource you wish to work with.

The region is contained in the endpoint, the S3 host name. It may be implicit, as in the case of Amazon’s default us-east-1:

Provider    Endpoint                        Region         Location
Amazon      s3.amazonaws.com                us-east-1      US East (N. Virginia)
Amazon      s3-eu-central-1.amazonaws.com   eu-central-1   EU (Frankfurt)
Amazon      s3-eu-west-2.amazonaws.com      eu-west-2      EU (London)
Exoscale    sos-ch-dk-2.exo.io              ch-dk-2        Switzerland

See Amazon’s S3 regions table for more information about regions and their names.

The endpoint is the address that the client will need to connect to.

The example below shows which sections of the DSN correspond to different parameters:

az://exampletest43b4705bdf:c2U9MjAzNi0wMS0y@@blob.core.windows.net/public-media

  • account name: exampletest43b4705bdf

  • encoded token: c2U9MjAzNi0wMS0y

  • host name: blob.core.windows.net

  • container name: public-media

Note down the parameters ready for use.

The encoded token needs to be decoded from Base64. Depending on the token, you may need to add padding in the form of one or more = characters to the end of the string, so that its length is a multiple of four. The decoded token will look something like:

se=2036-01-22T08%3A56%3A16Z&sp=rwdlc&sv=2018-11-09&ss=b&srt=co&sig=ahD3gmIxymeattHsQ4mePWE5DFUol%2BW6byQt5EZ0H/U%3D
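As a sketch, the token portion of the example DSN above (which is truncated) can be padded and decoded with standard shell tools:

```shell
t='c2U9MjAzNi0wMS0y'   # encoded token from the example DSN (truncated)
# Add '=' padding until the length is a multiple of 4, as Base64 requires
while [ $(( ${#t} % 4 )) -ne 0 ]; do t="${t}="; done
echo "$t" | base64 --decode
# prints: se=2036-01-2
```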

Your media container is always named public-media by default.

Some tools need a SAS-URI to make a connection to a blob storage. You can build this URI with the following format:

https://<account name>.blob.core.windows.net/<container name>?<decoded_token>
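Putting the example parameters together (the account name and container from the DSN above, plus the decoded token), the SAS-URI can be assembled in the shell, for instance:

```shell
account='exampletest43b4705bdf'   # account name from the example DSN
container='public-media'          # default media container name
token='se=2036-01-22T08%3A56%3A16Z&sp=rwdlc&sv=2018-11-09&ss=b&srt=co&sig=ahD3gmIxymeattHsQ4mePWE5DFUol%2BW6byQt5EZ0H/U%3D'
echo "https://${account}.blob.core.windows.net/${container}?${token}"
```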

Using the client#

Run:

aws configure

You will be prompted for some of the storage access parameters:

  • AWS Access Key ID - key

  • AWS Secret Access Key - secret

  • Default region name - storage region

The aws configure command stores the configuration in ~/.aws.

Run aws s3 followed by options, commands and parameters. For example, to list the contents of a bucket:

➜ aws s3 ls example-test-68564d3f78d0935f-8f20b19.aldryn-media.io
       PRE filer_public/
       PRE filer_public_thumbnails/

Or, to copy (cp) a file from your own computer to S3:

➜ aws s3 cp example.png s3://example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io/example.png
upload: ./example.png to s3://example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io/example.png

Using AWS CLI with other providers

For non-AWS providers, such as Exoscale, you will need to add the --endpoint-url option to the command, as the AWS CLI otherwise assumes an endpoint on .amazonaws.com. For the Exoscale example above, you would use:

aws s3 --endpoint-url=https://sos-ch-dk-2.exo.io [...]

Note that the scheme (typically https://) must be included.

Use the parameters with the Azure CLI, for example:

az storage blob list --container-name public-media --account-name exampletest43b4705bdf --sas-token "se=2036-01-22T08%3A56%3A16Z&sp=rwdlc&sv=2018-11-09&ss=b&srt=co&sig=ahD3gmIxymeattHsQ4mePWE5DFUol%2BW6byQt5EZ0H/U%3D"

Use the Divio CLI for local access to cloud storage#

The application’s media files can be found in the /data/media directory, and can be managed and manipulated in the normal way on your own computer.

Be aware that if you edit application files locally, your operating system may save some hidden files. When you push your media to the cloud, these hidden files will be pushed too. However, this does not usually present a problem.

Pushing and pulling media files#

The Divio CLI includes pull and push commands that target the test or live server as required.

Warning

Note that all push and pull operations completely replace all files at the destination, and do not perform any merges of assets. Locally, the /data/media directory will be deleted and replaced; on the cloud, the entire bucket will be replaced.
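As a sketch of the commands involved (assuming current Divio CLI syntax, in which the target environment is given as an argument):

```shell
# Pull the media files from the test environment into local /data/media
divio app pull media test

# Push the local media files to the live environment
divio app push media live
```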

Limitations#

You may encounter some file transfer size limitations when pushing and pulling media using the Divio CLI. Interacting directly with the storage service, as described above, is a way around this.

It can also be much faster, and allows selective changes to files in the system.
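For example, rather than replacing the whole bucket, the AWS CLI’s sync command transfers only new or changed files (a sketch, using the example bucket name from above):

```shell
# Sync the local media directory to the bucket, uploading only changed files
aws s3 sync ./data/media/ \
    s3://example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io/ \
    --acl public-read
```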

Configuring S3 buckets#

Storage ACLs (Access Control Lists)#

When uploading files to your storage, you may need to specify the ACLs explicitly - in effect, the file permissions - on the files. If you don’t set the correct ACLs, you may find that attempts to retrieve them (for example in a web browser) give an “access denied” error.

On AWS S3, the public-read ACL needs to be set (by default it’s private). This is the ACL required for general use.

For example, you can use --acl public-read as a flag for operations such as cp, or aws s3api put-object-acl and aws s3api get-object-acl to set and get ACLs on existing objects.
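For example, to set and then read back the ACL on an object already in the bucket (a sketch, reusing the example bucket and file names from above):

```shell
# Make an existing object publicly readable
aws s3api put-object-acl \
    --bucket example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io \
    --key example.png \
    --acl public-read

# Inspect the object's ACL to confirm the change
aws s3api get-object-acl \
    --bucket example-test-68564d3f78d04c5f-8f20b19.aldryn-media.io \
    --key example.png
```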

Enable CORS#

CORS (cross-origin resource sharing) is a mechanism that allows resources on one domain to be served when requested by a page on another domain.

These requests are blocked by default by S3 media storage; when a request is blocked, you’ll see an error reported in the browser console:

Access to XMLHttpRequest at 'https://example.divio-media.com/images/image.jpg' from origin
'https://example.us.aldryn.io' has been blocked by CORS policy: No
'Access-Control-Allow-Origin' header is present on the requested resource.

In order to resolve this, the storage bucket needs to be configured to allow requests from a different origin.

This can be done using the AWS CLI’s S3 API tool (see the notes on how to use the client, above).

Warning

You will likely receive a GetBucketCors operation: Access Denied error when attempting to use the S3 API with buckets on applications created before 10th February 2020. If this occurs, but other operations such as aws s3 ls work as expected, then your bucket will need to be updated. Please contact Divio support so that we can do this for you.

Now you can check for any existing CORS configuration:

aws s3api get-bucket-cors --bucket <bucket-name>

You will receive a The CORS configuration does not exist error if one is not yet present.

A CORS configuration is specified in JSON. It’s beyond the scope of this documentation to outline how your bucket should be configured for CORS; see AWS’s own Configuring and using cross-origin resource sharing documentation for more.

However, an example that allows GET and HEAD requests from any origin would be:

{
   "CORSRules": [
       {
           "AllowedHeaders": ["*"],
           "AllowedMethods": ["GET", "HEAD"],
           "AllowedOrigins": ["*"],
           "MaxAgeSeconds": 3000
       }
   ]
}

Save your configuration as a file (cors.json) and use the API to upload it to the bucket:

aws s3api put-bucket-cors --bucket <bucket-name> --cors-configuration file://cors.json

See the AWS S3 CLI API documentation for further information about available operations.