In this article, you'll learn how to configure your application to use object storage on MedStack Control. We demonstrate this using a simple Flask application written in Python that's deployed and running in a cluster.
Prerequisite: Create a SAS token in MedStack Control
This article covers:
- The Azure Blob SDK
- The sample application
- Configuring the service in MedStack Control
- Application Demo
- Application Source Code
The Azure Blob SDK
Azure provides an SDK, azure.storage.blob, that applications can use to manage blobs inside containers in the object store. The SDK supports the following blob interactions:
- List all blobs in the container
- Upload blobs to the container
- Delete blobs from the container
- Download blobs from the container
The sample application demonstrates using (1) a SAS token and (2) the ContainerClient class of the Azure Storage SDK to interact with blobs in the application layer.
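Before diving into the sample application, it helps to see what a SAS token actually carries. The token is the query string of the SAS URL, and its parameters encode things like permissions and expiry. The sketch below (the helper name and the URL are illustrative placeholders, not part of the sample app or a real account) uses only the standard library to inspect those parameters:

```python
from urllib import parse

def sas_token_params(sas_url):
    """Return the SAS token's query parameters as a dict.

    Standard SAS parameter names include 'sv' (service version),
    'se' (signed expiry), 'sp' (signed permissions), and 'sig' (signature).
    """
    return dict(parse.parse_qsl(parse.urlsplit(sas_url).query))

# Placeholder SAS URL, for illustration only
example = ("https://myaccount.blob.core.windows.net/mycontainer"
           "?sv=2020-08-04&se=2022-01-01T00%3A00%3A00Z&sp=racwdl&sig=abc123")
params = sas_token_params(example)
print(params["sp"])  # racwdl
```

Checking `sp` and `se` this way is a quick sanity test that a token grants the permissions your application needs and hasn't expired.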
1 – The sample application
This section describes the sample application: its source code and how it uses the Azure Storage SDK. The three subsections are:
- Code Snippets
- Application Details
- Flask Application
Code Snippets
The application layer can manage blobs through the SDK's ContainerClient and BlobClient classes, as the following snippets show.
Get a container client using SAS token
from azure.storage.blob import ContainerClient
ContainerClient.from_container_url(self.sas_url)
Delete a blob from a container
blob_client = container_client.get_blob_client(blobname)
blob_client.delete_blob()
Download a blob from a container
blob_client = container_client.get_blob_client(blob_name)
with open(dest_file, "wb") as my_blob:
    download_stream = blob_client.download_blob()
    download_stream.readinto(my_blob)
Get a list of blobs in a container
container = self.get_container_client()
blobs_list_response = container.list_blobs()
blob_list = []
for blob in blobs_list_response:
    blob_list.append(blob['name'])
Upload a blob to a container
container_client = self.get_container_client()
with open(file, "rb") as data:
    blob_client = container_client.upload_blob(name=name, data=data)
properties = blob_client.get_blob_properties()
Application Details
Python 3.8.9
azure-storage-blob == 12.9.0
Flask == 2.0.1
Dockerfile
FROM python:3-slim
RUN apt-get update
RUN apt-get install -y wget gnupg2 libpq-dev gcc net-tools vim
WORKDIR /usr/src/app
COPY . .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000/tcp
ENTRYPOINT ["python", "app.py"]
Flask Application
app.py
from flask import Flask, render_template, redirect, url_for, flash, request, send_file, send_from_directory
from container import Container
from werkzeug.utils import secure_filename
from urllib import parse
import os

app = Flask(__name__)

with open('sas_url', 'r') as file:
    sas_url = file.read()

parsed_url = parse.urlsplit(sas_url)
url = f"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}"
query_string = parsed_url.query


@app.route("/")
def index():
    container = Container(sas_url)
    blobs = container.get_list_of_blobs()
    print(blobs)
    return render_template('index.html', blobs=blobs, blobs_len=len(blobs),
                           container_url=url, query_string=query_string)


@app.route('/upload')
def upload():
    container = Container(sas_url)
    container.upload_blob_to_container("./test_file.txt")
    return redirect(url_for('index'))


@app.route('/del/<name>')
def del_blob(name):
    container = Container(sas_url)
    container.delete_blob_from_container(name)
    return redirect(url_for('index'))


@app.route('/download/<name>')
def download_blob(name):
    container = Container(sas_url)
    container.download_blob_from_container(name, name)
    return send_from_directory(directory='.', path=name, as_attachment=True)


@app.route('/del_all')
def del_all():
    container = Container(sas_url)
    container.delete_all_blobs()
    return redirect(url_for('index'))


@app.route('/upload-file', methods=['POST'])
def uploadFile():
    if request.method == 'POST':
        file = request.files['file']
        if file:
            filename = secure_filename(file.filename)
            file.save(filename)
            print(f"filename = {filename}")
            print(f"uploadfile = {file}")
            container = Container(sas_url)
            container.upload_blob_to_container(filename)
            os.remove(filename)
    return redirect(url_for('index'))


if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000, debug=False)
container.py
import uuid, os
from datetime import datetime
from azure.storage.blob import ContainerClient
from urllib import parse


class Container():
    def __init__(self, sas_url):
        self.sas_url = sas_url
        self.parsed_url = parse.urlsplit(self.sas_url)
        self.url = f"{self.parsed_url.scheme}://{self.parsed_url.netloc}{self.parsed_url.path}"
        self.query_string = self.parsed_url.query

    def get_container_client(self):
        return ContainerClient.from_container_url(self.sas_url)

    def delete_blob_from_container(self, blobname):
        container_client = self.get_container_client()
        blob_client = container_client.get_blob_client(blobname)
        blob_client.delete_blob()

    def delete_all_blobs(self):
        blobs = self.get_list_of_blobs()
        for blob in blobs:
            self.delete_blob_from_container(blob)

    def check_if_file_exists(self, name):
        blobs = self.get_list_of_blobs()
        if name not in blobs:
            return False
        return True

    def download_blob_from_container(self, blob_name, dest_file):
        container_client = self.get_container_client()
        blob_client = container_client.get_blob_client(blob_name)
        with open(dest_file, "wb") as my_blob:
            download_stream = blob_client.download_blob()
            download_stream.readinto(my_blob)

    def get_list_of_blobs(self):
        container = self.get_container_client()
        blobs_list_response = container.list_blobs()
        blob_list = []
        for blob in blobs_list_response:
            blob_list.append(blob['name'])
        return blob_list

    def upload_blob_to_container(self, file):
        container_client = self.get_container_client()
        name = os.path.basename(file)
        if self.check_if_file_exists(name):
            split_name = name.split('.')
            now = datetime.now()
            split_name[0] = split_name[0] + '-' + now.strftime("%m-%d-%Y-%H-%M-%S")
            name = split_name[0] + '.' + split_name[-1]
            # name = str(uuid.uuid4())
        with open(file, "rb") as data:
            blob_client = container_client.upload_blob(name=name, data=data)
        properties = blob_client.get_blob_properties()
        return properties
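The most subtle part of container.py is the collision handling in upload_blob_to_container: if a blob with the same name already exists, a timestamp is appended to the base name so the upload doesn't overwrite it. That logic can be isolated as a pure function, which makes it easy to verify. This is a sketch; unique_blob_name is an illustrative name and not part of the sample application:

```python
import os
from datetime import datetime

def unique_blob_name(file_path, existing_names, now=None):
    """Mimic the sample app's collision handling: if a blob with this
    name already exists in the container, append a timestamp to the
    base name so the upload won't overwrite the existing blob."""
    name = os.path.basename(file_path)
    if name not in existing_names:
        return name
    now = now or datetime.now()
    parts = name.split('.')
    stamped = parts[0] + '-' + now.strftime("%m-%d-%Y-%H-%M-%S")
    return stamped + '.' + parts[-1]

print(unique_blob_name("./test_file.txt", []))  # test_file.txt
```

Note that, like the original, this keeps only the first and last dot-separated segments, so a name like `archive.tar.gz` would lose its middle segment on rename; a production version might want to split on the final dot only.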
requirements.txt
autopep8==1.5.7
azure-core==1.18.0
azure-storage-blob==12.9.0
certifi==2021.5.30
cffi
charset-normalizer==2.0.6
click==8.0.1
cryptography==3.4.8
Flask==2.0.1
idna==3.2
isodate==0.6.0
itsdangerous==2.0.1
Jinja2==3.0.1
MarkupSafe==2.0.1
msrest==0.6.21
oauthlib==3.1.1
pycodestyle==2.7.0
pycparser==2.20
requests==2.26.0
requests-oauthlib==1.3.0
six==1.16.0
toml==0.10.2
urllib3==1.26.7
Werkzeug==2.0.1
2 – Configuring the service in MedStack Control
Once you've built the application image with Docker and pushed it to a container registry, you can pull the application into a cluster on MedStack Control. You can store the SAS token as a Docker Secret to keep it secure, then configure the application service to mount the Secret into the container so it's available at runtime.
The two subsections are:
- Creating a Secret
- Mounting a Secret Inside a Container
Creating a Secret
1) In the cluster, click "Manage Docker"
2) Navigate to the "Secrets" tab and then click "New Secret"
3) Enter a name to identify the Secret, paste the SAS token URL into the data input box, then click "Create Secret"
You will see the Secret listed among all Docker Secrets in this cluster.
Mounting a Secret Inside a Container
1) In the cluster, click "Manage Docker"
2) Navigate to the "Services" tab and then click the "view" link to open the service details page
3) Click "Update" to edit the service configuration
4) Scroll to the "Secrets" section and click "Add secret". In the "Name" dropdown list, select the Secret created for the SAS token. In the "Filename" input box, enter the full file path to which you'd like the Secret's data written. In this example, at container runtime, the SAS token URL will be written to the file sas_url located at /usr/src/app/ on the container's filesystem.
5) Click "Update" to deploy the latest application configuration changes.
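At runtime, all the application has to do is read the mounted Secret file back. The sketch below shows the idea; the helper name is illustrative (the sample app.py simply opens 'sas_url' relative to its working directory, which is /usr/src/app in the Dockerfile):

```python
def load_sas_url(path="/usr/src/app/sas_url"):
    """Read the SAS URL that MedStack Control writes into the container
    at the filename configured in step 4. strip() guards against a
    trailing newline in the Secret data."""
    with open(path, "r") as f:
        return f.read().strip()
```

Because the token lives only in a Docker Secret and is read at runtime, it never needs to be baked into the image or committed to source control.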
3 – Application Demo
As with all applications deployed to clusters on MedStack Control, your application becomes available to the open internet once you have:
- Updated your DNS records for the domain(s) to include an A record pointing to the cluster manager node IP address
- Configured the application service to include the mapped domain(s) in the "Domain Mapping" section of the service configuration form.
Once the application has been successfully deployed, you can see the Azure Storage SDK at work: listing, uploading, downloading, and deleting files in the object store.
4 – Application Source Code
You can download the source code for the sample application below: