― Cloud Migration from AWS to Azure

Moving an Odoo application from AWS to Azure, which involved the following services and tasks:

1. EC2 to Azure Virtual Machines (VMs)

  • Choose the right VM type and size based on the application's requirements (an illustrative Terraform sketch of the target VM and database appears after this list).

  • Configure the VM's network settings, including subnets, network security groups (NSGs), and public IP addresses.

  • Migrate the application's data and files to the Azure VM.

  • Test the application to ensure it is functioning properly on the Azure VM.

2. RDS PostgreSQL to Azure Database for PostgreSQL

  • Create an Azure Database for PostgreSQL instance and configure the appropriate settings, such as the database name, username, and password.

  • Migrate the data from the RDS PostgreSQL database to the Azure Database for PostgreSQL instance.

  • Test the application to ensure it is functioning properly with the Azure Database for PostgreSQL instance.

3. Networking and Connectivity

  • Configure the network settings to allow communication between the Azure VM and the Azure Database for PostgreSQL instance.

  • Ensure that the application can access any external resources it requires, such as other databases or APIs.

4. Security

  • Implement appropriate security measures to protect the application and data, such as firewalls, encryption, and access control.

5. Monitoring and Logging

  • Set up monitoring and logging to track the performance and health of the application and the Azure resources it uses.

6. Cost Optimization

  • Review the pricing for Azure VMs and Azure Database for PostgreSQL to ensure you are using the most cost-effective options for your application.

7. High Availability and Disaster Recovery

  • Implement high availability and disaster recovery strategies to ensure the application is always available and can recover from failures.

8. Performance Tuning

  • Monitor the application's performance and make adjustments to the Azure VM and Azure Database for PostgreSQL instance as needed to optimize performance.

9. Compliance and Regulations

  • Ensure that the application and the Azure resources it uses comply with any relevant compliance and regulatory requirements.

10. Continuous Improvement

  • Continuously monitor the application and the Azure resources it uses to identify areas for improvement and make necessary changes.
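
As a rough illustration of steps 1 and 2, the sketch below shows how the target Azure VM and PostgreSQL server could be described in Terraform. All names, sizes, and credentials are placeholders, and this is not necessarily how the migration itself was carried out.

    # Minimal sketch of the target resources; assumes an existing subnet.
    provider "azurerm" {
      features {}
    }

    variable "app_subnet_id" {
      type        = string
      description = "ID of the subnet the Odoo VM is attached to"
    }

    resource "azurerm_resource_group" "odoo" {
      name     = "rg-odoo-prod"   # placeholder
      location = "uksouth"
    }

    resource "azurerm_network_interface" "odoo" {
      name                = "nic-odoo"
      location            = azurerm_resource_group.odoo.location
      resource_group_name = azurerm_resource_group.odoo.name

      ip_configuration {
        name                          = "internal"
        subnet_id                     = var.app_subnet_id
        private_ip_address_allocation = "Dynamic"
      }
    }

    # Size chosen to roughly match the EC2 instance's CPU/memory profile.
    resource "azurerm_linux_virtual_machine" "odoo" {
      name                            = "vm-odoo"
      resource_group_name             = azurerm_resource_group.odoo.name
      location                        = azurerm_resource_group.odoo.location
      size                            = "Standard_D2s_v3"
      admin_username                  = "odooadmin"
      admin_password                  = "ChangeMe123!"   # placeholder only
      disable_password_authentication = false
      network_interface_ids           = [azurerm_network_interface.odoo.id]

      os_disk {
        caching              = "ReadWrite"
        storage_account_type = "Premium_LRS"
      }

      source_image_reference {
        publisher = "Canonical"
        offer     = "0001-com-ubuntu-server-jammy"
        sku       = "22_04-lts-gen2"
        version   = "latest"
      }
    }

    # Managed PostgreSQL replacing RDS; data is migrated afterwards, for
    # example with pg_dump/pg_restore or the Azure Database Migration Service.
    resource "azurerm_postgresql_flexible_server" "odoo" {
      name                   = "psql-odoo"      # placeholder
      resource_group_name    = azurerm_resource_group.odoo.name
      location               = azurerm_resource_group.odoo.location
      version                = "14"
      administrator_login    = "odooadmin"
      administrator_password = "ChangeMe123!"   # placeholder only
      sku_name               = "GP_Standard_D2s_v3"
      storage_mb             = 65536
    }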

― Building and deploying infrastructure in Azure PART 1

Building and Deploying Resources in Azure using Terraform, Octopus Deploy, Umbraco (Content Management System), and a learning platform site.

1. Setting up Terraform

  • Install Terraform and configure it with the Azure provider.

  • Create a Terraform configuration file (.tf) that defines the Azure resources you want to create, such as web apps, functions, storage accounts, Azure SQL databases, cache, and virtual networks (a trimmed Terraform sketch appears after this list).

2. Creating Azure Resources with Terraform

  • Run terraform plan and terraform apply to create the Azure resources defined in your configuration file.

  • Terraform provisions the resources in Azure and records their state in a state file (.tfstate).

3. Setting up Octopus Deploy

  • Create an Octopus Deploy project.

4. Deploying Resources with Octopus Deploy

  • Create a deployment process in Octopus Deploy that targets the Azure web apps and functions.

5. Deploying Umbraco with Octopus Deploy

  • Add the Umbraco and learning platform website files to the Octopus Deploy project.

  • Create a deployment process in Octopus Deploy that deploys the Umbraco and learning platform website files to the web apps.

6. Testing and Monitoring

  • Test the Umbraco website to ensure it is functioning properly.

  • Monitor the Azure resources and the Umbraco and learning platform websites to ensure they are healthy and available.

7. Continuous Integration and Continuous Deployment (CI/CD)

  • Set up a CI/CD pipeline to automatically build and deploy the Terraform configuration, Octopus Deploy project, and Umbraco and learning platform websites when changes are made.

8. Security and Compliance

  • Implement appropriate security measures to protect the Azure resources and the Umbraco and learning platform websites.

  • Ensure that the Azure resources and the Umbraco and learning platform websites comply with any relevant compliance and regulatory requirements.

9. Maintenance and Updates

  • Regularly maintain and update the Azure resources, Octopus Deploy project, and Umbraco and learning platform websites to ensure they are up-to-date and secure.
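
A trimmed Terraform sketch of the kind of resources step 1 describes. Names and SKUs are placeholders; the real configuration also covers functions, cache, Azure SQL databases, and virtual networks.

    # Assumes provider "azurerm" { features {} } as in the earlier sketch.
    resource "azurerm_resource_group" "platform" {
      name     = "rg-learning-platform"   # placeholder
      location = "uksouth"
    }

    resource "azurerm_service_plan" "platform" {
      name                = "asp-learning-platform"
      resource_group_name = azurerm_resource_group.platform.name
      location            = azurerm_resource_group.platform.location
      os_type             = "Windows"
      sku_name            = "S1"
    }

    # Web app that Octopus Deploy later targets with the Umbraco package.
    resource "azurerm_windows_web_app" "umbraco" {
      name                = "app-umbraco-demo"   # placeholder, must be globally unique
      resource_group_name = azurerm_resource_group.platform.name
      location            = azurerm_resource_group.platform.location
      service_plan_id     = azurerm_service_plan.platform.id

      site_config {}
    }

    resource "azurerm_storage_account" "platform" {
      name                     = "stlearningplatform01"   # placeholder
      resource_group_name      = azurerm_resource_group.platform.name
      location                 = azurerm_resource_group.platform.location
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }

Running terraform plan and then terraform apply (step 2) creates these resources and records them in the .tfstate file, after which Octopus Deploy pushes the application packages onto the web apps.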

― Implementing Microsoft Azure Active Directory (Entra ID) authentication for Azure SQL databases

Problem:

We had numerous Azure SQL databases within our tenant, and each time we onboarded a new client, we needed to create additional Azure SQL databases to maintain separate data for each client as per our contractual obligations.

Every time we created an Azure SQL database, we had to run a script to add our data team members with their individual usernames and passwords. This process became increasingly challenging as the number of Azure SQL databases grew, making it difficult for the data team to remember their passwords due to the sheer volume of databases they had to access.

Solution:

To address this issue, we implemented Azure Active Directory (Entra ID) Authentication on all existing Azure SQL databases and configured it to be automatically enabled for any new databases created in the future. This eliminated the need for the data team to remember multiple passwords, as they could now use their tenant Active Directory (Entra ID) credentials to log into the Azure SQL databases.

This solution proved to be a simple yet highly effective measure that significantly simplified the process of creating and managing Azure SQL databases for the cloud team and eliminated the password management burden for the data team.

As an additional security measure, we created a security group that included the data team members and assigned the necessary role to the Azure SQL databases during the resource creation phase. This ensured that only authorized individuals had access to the databases, further enhancing the security of our data.
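
A minimal Terraform sketch of this pattern, assuming a hypothetical data-team security group and placeholder server and database names; the actual implementation wires in tenant-specific details not shown here.

    # Assumes provider "azurerm" { features {} } and the azuread provider are configured.
    data "azuread_group" "data_team" {
      display_name     = "sg-data-team"   # hypothetical security group name
      security_enabled = true
    }

    resource "azurerm_mssql_server" "client" {
      name                = "sql-client-demo"   # placeholder
      resource_group_name = "rg-client-demo"
      location            = "uksouth"
      version             = "12.0"

      # Entra ID authentication only: no SQL usernames or passwords to remember.
      azuread_administrator {
        login_username              = data.azuread_group.data_team.display_name
        object_id                   = data.azuread_group.data_team.object_id
        azuread_authentication_only = true
      }
    }

    resource "azurerm_mssql_database" "client" {
      name      = "db-client-demo"
      server_id = azurerm_mssql_server.client.id
      sku_name  = "S0"
    }

Because the security group is assigned at creation time, every new client database automatically inherits the same Entra ID access model.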

― Building and deploying infrastructure in Azure PART 2

Building and Deploying Resources in Azure using Terraform, Azure DevOps (Ticket Management and Pipelines), Umbraco (Content Management System), and Learning Platform Site

1. Setting up Terraform

  • Install Terraform and configure it with the Azure provider.

  • Create a Terraform configuration file (.tf) that defines the Azure resources you want to create, such as web apps, functions, storage accounts, Azure SQL databases, cache, and virtual networks.

2. Creating Azure Resources with Terraform

  • Run terraform plan and terraform apply to create the Azure resources defined in your configuration file.

  • Terraform provisions the resources in Azure and records their state in a state file (.tfstate); see the remote-state note after this list for where that state typically lives once pipelines run the builds.

3. Setting up Azure DevOps (Ticket Management and Pipelines)

  • Create an Azure DevOps organization and project.

  • Set up a repository for your Terraform configuration files and Umbraco website files.

  • Create a pipeline in Azure DevOps that uses Terraform to create the Azure resources.

  • Create a pipeline in Azure DevOps that deploys the Umbraco website files to the Azure web apps.

4. Setting up Umbraco (Content Management System)

  • Install Umbraco on the Azure web apps created by Terraform.

  • Configure Umbraco and add the necessary content and functionality.

5. Setting up Learning Platform Site

  • Install the learning platform site on the Azure virtual machine created by Terraform.

  • Configure the learning platform site and add the necessary content and functionality.

6. Testing and Monitoring

  • Test the Umbraco website and the learning platform site to ensure they are functioning properly.

  • Monitor the Azure resources and the Umbraco website and the learning platform site to ensure they are healthy and available.

7. Continuous Integration and Continuous Deployment (CI/CD)

  • Set up a CI/CD pipeline to automatically build and deploy the Terraform configuration (applied using Ansible and Azure DevOps pipelines), the Umbraco website, and the learning platform site when changes are made.

8. Security and Compliance

  • Implement appropriate security measures to protect the Azure resources, the Umbraco website, and the learning platform site.

  • Ensure that the Azure resources, the Umbraco website, and the learning platform site comply with any relevant compliance and regulatory requirements.

9. Maintenance and Updates

  • Regularly maintain and update the Azure resources, the Umbraco website, and the learning platform site to ensure they are up-to-date and secure.
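
The write-up does not spell out where Terraform state is kept once Azure DevOps pipelines run the builds, but a typical arrangement stores it remotely in an Azure storage account so that all pipeline runs share the same state. A minimal sketch with placeholder names:

    # Hypothetical backend; the storage account and container must already exist.
    terraform {
      backend "azurerm" {
        resource_group_name  = "rg-terraform-state"
        storage_account_name = "stterraformstate01"
        container_name       = "tfstate"
        key                  = "learning-platform.terraform.tfstate"
      }
    }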

― Decommissioning resources in AWS and Azure

This task demanded meticulous planning and extensive documentation before any resources or data could be deleted.

It entailed coordinating with various stakeholders to ensure that the company's requirements were met, particularly concerning client data.

Thorough documentation was essential, requiring review and approval from multiple stakeholders before proceeding with the tasks.

The decommissioning process was driven by cost-saving measures, as the data in question had exceeded the contracted retention period.

Key skills demonstrated in this task:

  • Effective communication with diverse stakeholders, both technical and non-technical.

  • Ability to create detailed documentation with accurate information, including data types, retention periods, and client ownership.

― Azure SQL database integration with a VNet

As part of our security enhancements, we undertook a project to integrate all our Azure SQL Databases with a virtual network (VNet).

This integration allowed us to disable public network access in the firewall and to stop the Azure SQL Databases from accepting connections from other Microsoft services. Consequently, the databases could only communicate with resources inside the integrated VNet.
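
A sketch of this pattern for a single server in Terraform, assuming private endpoints were used for the VNet integration (service endpoint rules via azurerm_mssql_virtual_network_rule are the main alternative); all names are placeholders.

    # Assumes provider "azurerm" { features {} }; server and subnet already exist.
    data "azurerm_mssql_server" "client" {
      name                = "sql-client-demo"   # placeholder
      resource_group_name = "rg-client-demo"
    }

    data "azurerm_subnet" "data" {
      name                 = "snet-data"
      virtual_network_name = "vnet-platform"
      resource_group_name  = "rg-network"
    }

    # Keeps SQL traffic on the VNet; public network access is switched off
    # separately on the server (public_network_access_enabled = false).
    resource "azurerm_private_endpoint" "sql" {
      name                = "pe-sql-client-demo"
      location            = "uksouth"
      resource_group_name = "rg-client-demo"
      subnet_id           = data.azurerm_subnet.data.id

      private_service_connection {
        name                           = "psc-sql-client-demo"
        private_connection_resource_id = data.azurerm_mssql_server.client.id
        subresource_names              = ["sqlServer"]
        is_manual_connection           = false
      }
    }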

Given the substantial number of Azure SQL Databases running live in production, this endeavor required meticulous planning and coordination. We presented our proposal in production change meetings to obtain the necessary approvals. The implementation was carried out on a client-by-client basis, and we scheduled the changes for after business hours to minimize disruption.

To ensure seamless functionality, we engaged our testing team to conduct thorough end-to-end testing. This testing verified that the Azure web apps continued to communicate effectively with the Azure SQL Databases after the VNet integration.

― SSO authentication to the site using RBAC roles

?????

― Leading penetration testing

Served as the primary point of contact from the Cloud team during penetration testing, providing valuable information about each of our services and granting the testers least privilege access to conduct their assessments.

Upon receiving the test results, I assumed responsibility for resolving high and medium-priority findings before the final sign-off. This entailed effective communication with various stakeholders, including developers, the data team, and management, to develop plans and timelines for addressing the issues. In certain cases, we provided explanations and justifications for accepting residual risks due to technical limitations or the presence of alternative safeguards that the pen testers may not have been aware of.

This experience not only deepened my understanding of services beyond my immediate area of expertise but also honed my communication skills and allowed me to acquire knowledge from other teams outside of CloudOps.

― Leading the Identity and Access Management (IAM) workstream

This workstream encompassed the following tasks:

  • Restructuring the management groups in Azure by categorizing them into non-production and production environments.

  • Establishing security groups with appropriate access levels for each team.

  • Implementing a break-glass process using Microsoft Entra Privileged Identity Management (PIM), requiring users to provide justification, such as ticket numbers, duration of access, and reasons for requesting access, before gaining temporary access to production resources. This information is then sent to approvers for review and approval.

These measures enhanced user access management efficiency by simplifying the addition and removal of joiners and leavers from security groups and ensured that teams have the necessary permissions to perform their daily tasks. Additionally, the introduction of PIM ensures that production access is granted only when necessary, for a specified duration, and is subject to audit trails, as users are required to provide detailed information justifying their access requests.
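
PIM approvals and access reviews are configured through the Entra portal, but the management-group split and the per-team security-group access can be expressed in Terraform roughly as follows; the group names and the role are placeholders.

    # Assumes provider "azurerm" { features {} } plus the azuread provider.
    resource "azurerm_management_group" "nonprod" {
      display_name = "non-production"
    }

    resource "azurerm_management_group" "prod" {
      display_name = "production"
    }

    data "azuread_group" "cloudops" {
      display_name     = "sg-cloudops"   # hypothetical team security group
      security_enabled = true
    }

    # Standing Reader access at the production scope; elevated roles are
    # requested just-in-time through PIM rather than assigned permanently.
    resource "azurerm_role_assignment" "cloudops_prod_reader" {
      scope                = azurerm_management_group.prod.id
      role_definition_name = "Reader"
      principal_id         = data.azuread_group.cloudops.object_id
    }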

― Setting up Azure Data Factory infrastructure

Setting Up Azure Data Factory Infrastructure with Linked Services, a Self-Hosted Runtime, Private Endpoints, and a Release Pipeline in Azure DevOps (an illustrative Terraform sketch of the core resources follows the steps below).

1. Create an Azure Data Factory

  • In the Azure portal, navigate to the Azure Data Factory service.

  • Click on Create data factory.

  • Enter a name for the data factory and select the region.

  • Click on Create.

2. Create Linked Services

  • In the Azure Data Factory, click on Linked services.

  • Click on New linked service.

  • Select the data source that you want to connect to (e.g., Azure Storage, SQL Server, etc.).

  • Enter the connection details and click on Create.

3. Create a Self-Hosted Runtime

  • In the Azure Data Factory, click on Integration runtimes.

  • Click on New integration runtime.

  • Select Self-hosted integration runtime.

  • Enter a name for the runtime and select the region.

  • Click on Create.

  • Follow the instructions to install the self-hosted runtime on a machine.

4. Create Private Endpoints

  • In the Azure Data Factory, click on Private endpoints.

  • Click on New private endpoint.

  • Select the data source that you want to create a private endpoint for (e.g., Azure Storage, SQL Server, etc.).

  • Enter the connection details and click on Create.

5. Create a Release Pipeline in Azure DevOps

  • In Azure DevOps, create a new project.

  • Add a YAML file to the project.

  • In the YAML file, define the steps for the release pipeline: building the data factory project, deploying it to the Azure Data Factory, and triggering a data pipeline run.

  • Save the YAML file and commit it to the repository.

  • Set up override parameters so that, when a release is promoted from the Dev environment to Staging and Production, the SQL server names within the linked services and private endpoints change to match the target environment.

6. Test the Data Factory

  • Trigger a data pipeline run from the Azure Data Factory.

  • Verify that the data pipeline runs successfully.
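
Steps 1 to 4 can also be captured as code. A sketch in Terraform with placeholder names; the per-environment SQL server names from step 5 would be injected through pipeline variables rather than hard-coded.

    # Assumes provider "azurerm" { features {} }.
    variable "sql_server_id" {
      type        = string
      description = "Resource ID of the Azure SQL server the factory connects to"
    }

    resource "azurerm_data_factory" "etl" {
      name                            = "adf-data-platform"   # placeholder
      location                        = "uksouth"
      resource_group_name             = "rg-data-platform"
      managed_virtual_network_enabled = true
    }

    # Self-hosted integration runtime (step 3); the key it generates is used
    # when installing the runtime software on the host machine.
    resource "azurerm_data_factory_integration_runtime_self_hosted" "shir" {
      name            = "shir-data-platform"
      data_factory_id = azurerm_data_factory.etl.id
    }

    # Linked service to an Azure SQL database (step 2); the connection string
    # is a placeholder and is overridden per environment at release time.
    resource "azurerm_data_factory_linked_service_azure_sql_database" "sql" {
      name              = "ls-sql-client"
      data_factory_id   = azurerm_data_factory.etl.id
      connection_string = "Server=tcp:sql-client-dev.database.windows.net;Database=db-client"
    }

    # Managed private endpoint from the factory's managed VNet to the SQL
    # server (step 4), so traffic never crosses the public network.
    resource "azurerm_data_factory_managed_private_endpoint" "sql" {
      name               = "mpe-sql-client"
      data_factory_id    = azurerm_data_factory.etl.id
      target_resource_id = var.sql_server_id
      subresource_name   = "sqlServer"
    }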

― Monitoring and logging the Azure tenant

Leveraging Azure Log Analytics for Enhanced Monitoring and Troubleshooting

Azure Log Analytics is a powerful tool that enables us to monitor the health and performance of our Azure resources, identify and troubleshoot problems, gain insights into usage and performance trends, and create custom reports and dashboards.

To enhance our monitoring capabilities, we integrated Azure Log Analytics with our applications, allowing us to ingest logs for easier analysis and faster troubleshooting. This eliminated the need to log into individual applications to inspect Tomcat, Apache, Umbraco, and other application logs manually.

We configured our resources by enabling diagnostics settings and pushing the logs to our central Log Analytics workspace. This centralized logging solution streamlined our monitoring processes and provided a comprehensive view of our systems, enabling us to identify and resolve issues more efficiently.
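
A minimal Terraform sketch of the diagnostics pattern for a single resource (a web app is used here purely as an example, and names are placeholders); log category names vary by resource type.

    # Assumes provider "azurerm" { features {} }.
    variable "web_app_id" {
      type        = string
      description = "Resource ID of the web app to send diagnostics from"
    }

    resource "azurerm_log_analytics_workspace" "central" {
      name                = "log-central-monitoring"   # placeholder
      location            = "uksouth"
      resource_group_name = "rg-monitoring"
      sku                 = "PerGB2018"
      retention_in_days   = 30
    }

    # Push the web app's HTTP and console logs into the central workspace.
    resource "azurerm_monitor_diagnostic_setting" "webapp" {
      name                       = "diag-to-log-analytics"
      target_resource_id         = var.web_app_id
      log_analytics_workspace_id = azurerm_log_analytics_workspace.central.id

      enabled_log {
        category = "AppServiceHTTPLogs"
      }

      enabled_log {
        category = "AppServiceConsoleLogs"
      }

      metric {
        category = "AllMetrics"
      }
    }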

― Automating our quarterly user access review

Utilizing Azure DevOps Pipelines for Automated User Access Management

We implemented Azure DevOps pipelines to automate the execution of Python scripts that extract user information from various sources, including Entra ID, SQL Databases, RBAC access to resources within our tenants, and custom applications.

This automation ensures that any departing employees (leavers) are promptly removed from our Azure tenants and custom applications, enhancing security and compliance by preventing unauthorized access to sensitive data and resources.

― Containerizing the existing environment

Exploring Containerization with Azure Kubernetes Service (AKS)

We are investigating the potential benefits of containerization for our client learning platforms, which currently rely on web apps and functions. To this end, we have migrated our staging environment to Azure Kubernetes Service (AKS) to assess the advantages it offers in terms of setup time and infrastructure management.
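
A minimal Terraform sketch of the kind of AKS cluster the staging environment was moved to; node sizes, counts, and names are illustrative only.

    # Assumes provider "azurerm" { features {} }.
    resource "azurerm_resource_group" "aks" {
      name     = "rg-aks-staging"   # placeholder
      location = "uksouth"
    }

    resource "azurerm_kubernetes_cluster" "staging" {
      name                = "aks-learning-staging"
      location            = azurerm_resource_group.aks.location
      resource_group_name = azurerm_resource_group.aks.name
      dns_prefix          = "aks-learning-staging"

      # Small system pool for evaluation; production sizing would differ.
      default_node_pool {
        name       = "system"
        node_count = 2
        vm_size    = "Standard_D2s_v3"
      }

      identity {
        type = "SystemAssigned"
      }
    }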

Personally, I am actively pursuing the Certified Kubernetes Administrator (CKA) certification to deepen my understanding of this technology and contribute more effectively to the team's efforts in this area.

― Building self-hosted Azure DevOps infrastructure

Migrating to Self-Hosted Azure DevOps for Enhanced Security and Control

In response to client requirements for heightened security, we are transitioning from Microsoft-hosted Azure DevOps to a self-hosted Azure DevOps environment. This migration will enable us to maintain greater control over our infrastructure and ensure that all components are securely locked down within our virtual network.

To facilitate the execution of Terraform scripts for building Azure environments, we will employ self-hosted agents running in containers. These containers are built from Docker images stored in Azure Container Registry and deployed as container instances in Azure.
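
A sketch of the agent-hosting piece in Terraform, assuming the agent image follows the AZP_URL/AZP_TOKEN/AZP_POOL environment-variable convention used by Microsoft's documented self-hosted agent container; all names, the subnet, and the organisation URL are placeholders.

    # Assumes provider "azurerm" { features {} }; the agent image is built and
    # pushed to the registry separately from the agent Dockerfile.
    variable "agents_subnet_id" {
      type = string
    }

    variable "azp_pat" {
      type      = string
      sensitive = true   # personal access token, supplied from a secret store
    }

    resource "azurerm_container_registry" "agents" {
      name                = "acrdevopsagents01"   # placeholder, must be globally unique
      resource_group_name = "rg-devops-agents"
      location            = "uksouth"
      sku                 = "Basic"
      admin_enabled       = true
    }

    # Private container instance inside the VNet that registers as a pipeline agent.
    resource "azurerm_container_group" "agent" {
      name                = "ci-devops-agent-01"
      location            = "uksouth"
      resource_group_name = "rg-devops-agents"
      os_type             = "Linux"
      ip_address_type     = "Private"
      subnet_ids          = [var.agents_subnet_id]

      image_registry_credential {
        server   = azurerm_container_registry.agents.login_server
        username = azurerm_container_registry.agents.admin_username
        password = azurerm_container_registry.agents.admin_password
      }

      container {
        name   = "azp-agent"
        image  = "${azurerm_container_registry.agents.login_server}/azp-agent:latest"   # hypothetical image name
        cpu    = 1
        memory = 2

        secure_environment_variables = {
          AZP_URL   = "https://dev.azure.com/your-organisation"   # placeholder
          AZP_TOKEN = var.azp_pat
          AZP_POOL  = "self-hosted-linux"
        }
      }
    }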

By implementing these measures, we will achieve a highly secure and tightly controlled infrastructure that aligns with our client's stringent security requirements. Private endpoints will be utilized to facilitate secure communication between various components within our virtual network.

― Azure Policy

Collaborated with the team to implement Azure Policy for resource creation controls:

  • Enforced restrictions on resource creation to specific Azure regions to optimize resource placement and comply with regulatory requirements.

  • Implemented size restrictions for resource creation to ensure efficient resource utilization and cost optimization.

  • Enforced mandatory tagging for all newly created resources to facilitate cost management and automation processes. This tagging discipline enables us to effectively track and manage resource costs and automate tasks based on resource attributes.
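
As an illustration of the tagging control, a Terraform sketch of a custom deny policy and its subscription-scope assignment; the tag name (costCentre) and the scope are placeholders, and the region and size restrictions were handled with similar built-in policy definitions.

    # Assumes provider "azurerm" { features {} }.
    data "azurerm_subscription" "current" {}

    resource "azurerm_policy_definition" "require_cost_tag" {
      name         = "require-costcentre-tag"
      policy_type  = "Custom"
      mode         = "Indexed"
      display_name = "Require a costCentre tag on all resources"

      # Deny creation of any resource that does not carry the tag.
      policy_rule = jsonencode({
        if = {
          field  = "tags['costCentre']"
          exists = "false"
        }
        then = {
          effect = "deny"
        }
      })
    }

    resource "azurerm_subscription_policy_assignment" "require_cost_tag" {
      name                 = "require-costcentre-tag"
      subscription_id      = data.azurerm_subscription.current.id
      policy_definition_id = azurerm_policy_definition.require_cost_tag.id
    }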