Use this flow when your customer data lives in Google Cloud Storage. Dari uses an Agent Host-managed Google service account for the target environment and mounts one session prefix into /workspace/customer.

What you share with Dari

  • provider = gcs
  • Bucket name
  • Base prefix, for example dari/acme-prod
  • Target environment: dev or prod
Example remote layout:
gs://customer-bucket/dari/acme-prod/sessions/sess_123/
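The session URI in the layout above is plain string concatenation of the values you share. A minimal sketch, using the example values from the layout (not real resources):

```shell
# Sketch of how a session URI is composed from the shared values.
# These are the example values from the layout above, not real resources.
BUCKET="customer-bucket"
BASE_PREFIX="dari/acme-prod"
SESSION_ID="sess_123"
SESSION_URI="gs://${BUCKET}/${BASE_PREFIX}/sessions/${SESSION_ID}/"
echo "${SESSION_URI}"
```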

1. Get the Dari service account

Use the service account email provided by Dari for your target environment, for example:
dari-storage-prod@agent-host-prod.iam.gserviceaccount.com

2. Grant access to your bucket scope

Preferred:
  • Grant access on a dedicated bucket used only for Dari, or
  • Grant access on a managed folder or other scoped storage boundary if you already use one
If you grant bucket-wide access, the bucket should be dedicated to Dari.
Recommended role for read and write session storage:
  • roles/storage.objectUser
Bucket-level example:
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member="serviceAccount:DARI_SERVICE_ACCOUNT_EMAIL" \
  --role="roles/storage.objectUser"
Managed-folder example:
gcloud storage managed-folders add-iam-policy-binding gs://BUCKET_NAME/BASE_PREFIX/ \
  --member="serviceAccount:DARI_SERVICE_ACCOUNT_EMAIL" \
  --role="roles/storage.objectUser"
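To confirm a bucket-level grant landed, you can read the policy back and look for the Dari principal among the bindings. BUCKET_NAME is the same placeholder as above:

```shell
# Read back the bucket IAM policy and check for the Dari principal.
gcloud storage buckets get-iam-policy gs://BUCKET_NAME --format=json
```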
If you want a read-only bucket-backed workspace, use roles/storage.objectViewer instead. If the bucket uses CMEK, also grant the same principal access to the relevant Cloud KMS key.
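For the CMEK case, the grant on the Cloud KMS key can be sketched as below; KEY_NAME, KEYRING_NAME, and LOCATION are placeholders for your own key resources:

```shell
# Grant the Dari service account use of the CMEK key (placeholder names).
gcloud kms keys add-iam-policy-binding KEY_NAME \
  --keyring=KEYRING_NAME \
  --location=LOCATION \
  --member="serviceAccount:DARI_SERVICE_ACCOUNT_EMAIL" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"
```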

3. Register the connection with Dari

Share:
  • provider = gcs
  • Bucket name
  • Base prefix
  • Environment
Dari will derive one session prefix under that base prefix and mount it into /workspace/customer.
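Once a session starts, a quick smoke test confirms the mounted prefix is read-write. The sketch below uses a temporary directory as a stand-in so it runs anywhere; inside a Dari session you would set WORKSPACE to /workspace/customer instead:

```shell
# Write and read back a marker file to confirm read-write access.
# Inside a Dari session, set WORKSPACE=/workspace/customer instead.
WORKSPACE="$(mktemp -d)"
echo "ok" > "${WORKSPACE}/dari-smoke-test.txt"
cat "${WORKSPACE}/dari-smoke-test.txt"
```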

Notes

  • Keep dev and prod in separate prefixes.
  • Grant access to the smallest scope available in your cloud setup.
  • Do not use long-lived service account keys as the default production path.
Read Storage Overview for the shared storage model and Connect Storage on S3 for the AWS flow.