When you take your first steps into the world of cloud application deployment, you won’t get far without hearing the name “Kubernetes”.
Kubernetes is an open source, widely supported, rapidly growing platform for managing containerized workloads and services. Containers are virtualized run-time environments that make it easy to share CPU, memory, storage, and network resources at the operating system level. They offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. Kubernetes automates the manual processes involved in deploying, managing, and scaling containers.
Azure Kubernetes Service (AKS) brings the Microsoft Azure cloud platform together with Kubernetes’ orchestration technology to offer deployment, scaling, and management of Docker containers and container-based applications. It also offers provisioning, scaling, and upgrading of resources based on client demand or requirements, all without any Kubernetes cluster downtime. Docker creates the containers that hold application code and dependencies, and Kubernetes operates and scales groups of those containers to run applications in production.
One of the services that can integrate with AKS is the Azure Container Registry (ACR), a registry that holds Docker images. Images are immutable files that contain source code, libraries, dependencies, tools, and the other files needed to run an application. Containers use images to run applications, and the Azure Container Registry is built to receive and store images to run in containerized environments. ACR is a common part of container development workflow and can integrate with continuous integration and delivery tools like Azure Pipelines.
In this blog post, we will show an example of how you can package an application using Azure Kubernetes Service, push that application as a Docker image through an Azure Pipeline to the Azure Container Registry, and finally pull the Docker image back out of the Azure Container Registry so that you can run it on your machine and access it via your web browser.
In industry, you would be expected to automate this process; that, after all, is what Azure Pipelines is designed for as a medium for continuous integration and delivery. But in order to automate it, we must first understand it step by step.
Before you begin, make sure that you have the following:
- Access to Azure DevOps with a GitHub or Microsoft account
- A Microsoft Azure account
- Docker locally installed
- Azure CLI locally installed
Packaging and Pushing Your Application
First, you’ll need an application to package. In this case, we will use a simple Java “Hello World” Rest Controller. This will be stored in a Spring Boot Maven project, which you can generate from https://start.spring.io. Generate a project there with the Spring Web and Spring Boot Actuator dependencies on Java 11, and then download the .zip file it gives you.
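A minimal sketch of what such a controller class might look like; the package name follows the folder structure described below, and the class name and greeting text are illustrative assumptions:

```java
package com.example.azure.devops.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A simple "Hello World" REST controller. The greeting text is illustrative.
@RestController
public class SimpleController {

    // Responds to GET requests on the application root.
    @GetMapping("/")
    public String hello() {
        return "Hello World!";
    }
}
```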
Save this controller as SimpleController.java in your downloaded Spring Boot project files, in a new folder titled “controller” under the path spring-boot-example/src/main/java/com/example/azure/devops.
Create a project in your Azure DevOps workspace to store your application code in, and then use either the Git HTTPS or Git SSH URL to clone that project repository to your local machine.
Next, navigate to where you cloned that repository on your local machine and move your application files into that folder. Once you are in that folder in your terminal and the application files have been dropped into it, we can begin pushing.
To push the files to the Azure DevOps project, type “git add .”, which stages all files in the current directory for committing; then “git commit -m 'Initial project files'”, which creates a commit with the message “Initial project files”; and finally “git push”, which sends your commits to the online Git repository.
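Collected as one sequence, run from inside the cloned repository folder (the push works because cloning already configured the Azure DevOps remote):

```shell
# Stage all project files, commit them, and push to Azure DevOps.
git add .
git commit -m "Initial project files"
git push
```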
If everything was successful, you should see something like this:
Our application code and environment are now pushed to an Azure DevOps repository, which means we can now move on to the next step: containerization.
Moving your Application through an Azure Pipeline into Azure Container Registry
To send our application code, which is now stored in an Azure DevOps repository, into the Azure Container Registry, we’re going to need an Azure Pipeline.
Go to Pipelines on the left pane and click “Create Pipeline”. Specify that your code is in Azure Repos Git and then choose the repository that we made earlier, using “Maven” to configure the pipeline. Click “Save and run”, and an Azure pipeline definition will be created in the repository you selected as an azure-pipelines.yml file.
After the pipeline has finished running and the tests have passed, we need to add a step to the pipeline telling it to build a Docker image that we can send into the Azure Container Registry. To do this, we need to add some code to the azure-pipelines.yml file.
Go to the bottom of the azure-pipelines.yml file, which can be found in your Azure Repo, and append this to it:
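A Docker build step can take roughly this shape, using the standard Docker@2 task; the repository name and Dockerfile path here are assumptions for illustration:

```yaml
# Appended to azure-pipelines.yml: build a Docker image from the repo contents.
- task: Docker@2
  displayName: Build Docker image
  inputs:
    command: build
    repository: spring-boot-example   # assumed image/repository name
    dockerfile: '**/Dockerfile'       # assumed Dockerfile location
    tags: latest
```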
Commit the changes to the pipeline and a new run of the pipeline will automatically start, including this new Build Docker Image step. If everything was successful, you should see something like this:
You won’t see the “Push Docker image” step right now, but you should see that the Build Docker image step was successful. Next, we’ll set up the Push Docker image step. In order to do that, we need to connect this Azure Repo to the Azure Container Registry.
To use the Azure Container Registry, log into the Azure Portal and search for the Container Registry service. Create a new resource group and a container registry, and name them whatever you like.
Now that we’ve created the container registry, we need to instruct the pipeline to push the Docker image after it finishes building it.
We’ll be adding this build code to the pom.xml file within the Azure Repo project:
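One common shape for this, consistent with the <name> tag described next, is the spring-boot-maven-plugin image configuration; the plugin choice and values here are a sketch, not the only option:

```xml
<!-- Inside the <build><plugins> section of pom.xml.
     The registry hostname below is a placeholder. -->
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <image>
            <name>yourcontainerregistry.azurecr.io/spring-boot-example</name>
        </image>
    </configuration>
</plugin>
```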
Change the <name> tag’s “yourcontainerregistry” portion to match the registry name you created in the Azure Container Registry. In our example, we match it with “mahaboobscontainerregistry.azurecr.io”.
Go back to the azure-pipelines.yml file. On the right pane, there is a task for “Docker”, which lets you build or push Docker images. If you click it, it will notify you there is no attached container registry to send your application image to. We must create this connection now.
Go to the Azure DevOps project settings and click on “Service connections”. Create a service connection of type “Azure Container Registry”, and select the container registry that you created earlier. Click save, and the container registry and your pipeline should be attached.
Go back to the azure-pipelines.yml file and click on the Docker task once more. You will now see your container registry as eligible to be added. Select your container registry, and select the command for this task as “push”, with the tag “latest”. This will signify the image as the “latest” image to be pushed.
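The resulting YAML step looks roughly like this; the service connection name is an assumption and should match the connection you created above:

```yaml
# Push the built image to the attached Azure Container Registry.
- task: Docker@2
  displayName: Push Docker image
  inputs:
    command: push
    containerRegistry: 'your-acr-service-connection'  # assumed connection name
    repository: spring-boot-example                   # assumed image/repository name
    tags: latest
```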
Once you commit your changes, and the pipeline re-runs, you should see the step reflected in the job:
You should also now see that the container image is available for viewing in your newly attached container registry:
Pulling from the Azure Container Registry to your Local Machine
Now that we have the Docker image available in our container registry, we can run the application using Docker on our local machine. This is where you’ll need to make sure you have Azure CLI installed, as well as Docker.
Open the command prompt and log in to Azure using “az login” with your credentials.
Then, log in to your container registry using “az acr login --name name_of_your_container_registry”, where name_of_your_container_registry is the name of your container registry.
You can list the images stored in the registry using “az acr repository list --name name_of_your_container_registry”, and the images present on your local machine using the “docker images” command.
Next, pull your image using “docker pull name_of_your_container_registry.azurecr.io/spring-boot-example:latest”, replacing the “name_of_your_container_registry” part with the name of your container registry. Notice the “latest” at the end? That’s the tag identifier we put on our image in the pipeline step coming back to help us acquire the correct image.
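Put together, the login-and-pull sequence looks like this; replace name_of_your_container_registry with your own registry name:

```shell
# Authenticate to Azure, then to the registry, then pull the image locally.
az login
az acr login --name name_of_your_container_registry
docker pull name_of_your_container_registry.azurecr.io/spring-boot-example:latest
```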
Running your Application Image
Finally, you can run your image using “docker run -p 8080:8080 IMAGE_ID”, replacing IMAGE_ID with the image ID that you see after you run “docker images”. This runs the container and exposes the application on localhost:8080, which you can reach via your web browser.
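If you prefer not to look up the image ID, docker run also accepts the full image name and tag:

```shell
# Run the pulled image by name, mapping container port 8080 to local port 8080.
docker run -p 8080:8080 name_of_your_container_registry.azurecr.io/spring-boot-example:latest
```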
Type “localhost:8080” into your web browser, and if everything is successful, you should see your application (in our case, a Hello World program):
To package and deploy applications as containers using Microsoft Azure, you need to
- create an Azure repository,
- clone it to a local machine via Git,
- place your application into the local repository,
- push the repository onto Azure,
- create an Azure pipeline,
- connect the Azure pipeline to your Azure repository,
- add a task to the Azure pipeline to build a Docker image from your repository contents,
- connect an Azure Container Registry to your Azure pipeline,
- add a task to the Azure pipeline to push built Docker images into the ACR,
- use Azure CLI to log into your ACR,
- pull the Docker image using Docker commands in any CLI,
- finally, run your application using Docker commands.
DevOps and Agile development both cite CI/CD as a best practice, and this blog post introduces the tools you can use to bring those practices into your own workplace environment.
If you need further assistance, Get Your Free Consultation with us.
About the Authors:
Mahaboob Khan is a Cloud Engineer at ISmile Technologies. He has extensive experience working on Microsoft Azure, including implementing, managing, and troubleshooting user-related issues. With automation tools like Azure ARM Templates, Terraform, and Azure DevOps, he helps our clients automate the deployment of IaaS and PaaS services.
A technology enthusiast passionate about automation, Gabriel Chutuape is a Cloud Engineer at ISmile Technologies. He is part of the ISmile Technologies Cloud enablement team that helps customers with design, solution, and project engineering, integrating and implementing infrastructure technologies and services.