Projects and POCs
EC2 | Multi-AZ Deployment | Load Balancing
(Deploy multiple EC2 instances in different AZs, experiment with load balancing and target groups, and understand the impact of instance failure)
Storage | Volumes, S3, CLI
(Add volumes to an EC2 instance, migrate data from one volume to another, and write a CLI to upload documents to S3 from the local machine)
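One possible starting point for such a CLI, sketched here with the AWS SDK for Java 1.11.x and the default credential chain (the class name and argument layout are illustrative, not prescribed by this document):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import java.io.File;

    public class S3UploadCli {
        // Usage: java S3UploadCli <bucket> <key> <local-file>
        public static void main(String[] args) {
            // Picks up credentials/region from the default provider chain
            // (environment variables, ~/.aws/credentials, or an instance role).
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            s3.putObject(args[0], args[1], new File(args[2]));
            System.out.println("Uploaded " + args[2] + " to s3://" + args[0] + "/" + args[1]);
        }
    }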
Log aggregation | Quick data analysis
(Collect logs from different EC2 instances and write SQL queries on CSV data hosted in S3)
Creating a file share & sync solution using ownCloud and AWS
Scenario - Solving the Dropbox Problem
According to recent research, 40-75% of employees use Dropbox to share files inside and outside of their businesses. Half of those Dropbox users do this even though they know it is against the rules. More than 40% of businesses have experienced the exposure of confidential information, and the estimated average cost of a data breach was $5.5 million in 2011.
These files, containing sensitive company and customer data, are stored in a public cloud outside of the businesses' control - possibly even outside of the country. The potential for data leakage and security breaches is enormous, and companies need to stay compliant with their own policies and procedures for security and governance.
Managed services on AWS
RDS | EC2 database program
(Create a MySQL instance using RDS and access it from an EC2 instance with a custom program, using an appropriate role)
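A minimal sketch of such a client program over JDBC (the endpoint, database name, and credentials below are placeholders; an IAM-role-based variant would instead generate an RDS IAM authentication token, and mysql-connector-java must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RdsMySqlClient {
        public static void main(String[] args) throws Exception {
            // Placeholder RDS endpoint and database name.
            String url = "jdbc:mysql://mydb.example.us-east-1.rds.amazonaws.com:3306/testdb";
            try (Connection conn = DriverManager.getConnection(url, "admin", "<password>");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT NOW()")) {
                while (rs.next()) {
                    System.out.println("Server time: " + rs.getString(1));
                }
            }
        }
    }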
Building an Automated Business Process Using Managed Services on a Public Cloud
(Create an event-triggered business process leveraging multiple managed services from AWS)
Scenario - The Extended Enterprise
In the connected world, it is imperative that organizations be interlinked with their customers and vendors. This process has been sluggish, manual, batch-based, and prone to failure. Such integration design has led to impaired decision making and delayed detection of fraudulent actions.
The objective of this project is to create an automated, event-based, real-time process that does not have these limitations. Data should flow rapidly from the source to the destination.
You will leverage multiple managed services available from a public cloud platform (AWS in this case) to achieve this.
Big Data Management on Cloud
Cassandra Setup | Masterless Architecture Concepts
(Install a multi-node Cassandra cluster, induce failures, create a keyspace/table, and access them from a client)
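A minimal client-access sketch, assuming the DataStax Java driver 3.x and a placeholder contact point (the keyspace and table names are illustrative):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class CassandraPoc {
        public static void main(String[] args) {
            // Any node can serve as the contact point (masterless architecture).
            try (Cluster cluster = Cluster.builder().addContactPoint("10.0.0.11").build();
                 Session session = cluster.connect()) {
                session.execute("CREATE KEYSPACE IF NOT EXISTS poc WITH replication = "
                        + "{'class': 'SimpleStrategy', 'replication_factor': 3}");
                session.execute("CREATE TABLE IF NOT EXISTS poc.events (id int PRIMARY KEY, msg text)");
                session.execute("INSERT INTO poc.events (id, msg) VALUES (1, 'hello')");
                Row row = session.execute("SELECT msg FROM poc.events WHERE id = 1").one();
                System.out.println(row.getString("msg"));
            }
        }
    }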
Big Data with Hive on EMR
(Use EMR to analyze Big Data with Apache Hive. Get familiar with the cluster mode of EMR)
Course: Containers and Microservices
Docker | Images, containers, scripts
(Set up an instance with Docker, create multiple containers from existing images, and create a custom image using a Dockerfile)
Deploying a web application to ECS
Deploy a Java web application on Amazon Elastic Container Service (ECS). The web application needs to be bundled as a Docker image running on Apache Tomcat.
"The material shared in this document is proprietary. It is not to be distributed or shared except with the individual with whom it was directly sent."
Project Scenario
In the last two decades, small and large enterprises have invested heavily in developing bespoke applications. Since these applications have been built and enhanced over a period of time, they are complex, and any reengineering to convert them into smaller, modularized, independently hosted services is difficult.
With the advent of cloud and containerization, these organizations are looking to take advantage of the predictable packaging of these applications and leverage the managed container services from the cloud.
The objective of this project is to experience such a scenario and move a classical (simplified) web application to the cloud.
DevOps - Infrastructure Automation on AWS
CloudFormation | AWS CLI
(Create a web server instance in an AZ, create a target group, create a load balancer)
CI/CD | CodePipeline
(Set up a Git repository, check in a sample codebase, specify build and deployment rules, and automate the whole process using a pipeline)
Private Cloud
OpenStack Single Node Deployment on EC2
(Deploy and configure a single node OpenStack installation on EC2)
WOS File Storage to AWS S3
Introduction
This design document provides the details of the file upload migration logic from WOS file storage to an AWS S3 bucket. In addition, this WOS enhancement provides the capability to upload images/documents/videos and store the files in an AWS S3 bucket instead of on a local drive.
Detailed Design
Network Diagram: WOS Application File Storage to AWS S3
Figure: Network Diagram - WOS Application Image Upload to AWS S3 Bucket
There will be a total of five AWS S3 buckets, one dedicated to each WOS environment, as follows:
Environment | Bucket Name | User
Development | woss3dev | woss3devuser
QA | woss3qa | woss3qauser
UAT | woss3uat | woss3uatuser
Staging | woss3stg | woss3stguser
Production | woss3prod | woss3produser
Security
The images/videos will be stored in an AWS S3 bucket. The WOS application will access S3 over the internet, with access to the S3 buckets and objects restricted. Access will be secured as follows:
CPSL would use the RHS AWS account as a separate account for CPSL, so that usage/billing can be identified easily.
There will be a separate VPN gateway for the CPSL account; the system will not use the RHS VPN tunnel.
An AWS Identity and Access Management (IAM) user and bucket policies that specify which users can access a specific AWS S3 bucket.
An access key ID and secret access key.
Encryption to protect the data.
Prerequisites
An AWS S3 bucket should exist for each dedicated WOS environment.
An AWS IAM user should exist with the "AWSCodeStarFullAccess" and "AmazonS3FullAccess" policies attached.
A bucket policy should be in place that allows the WOS environment's firewall IP to communicate with S3.
Note: NIIT would set up the environment. Later, CPSL would be responsible for setting up and maintaining the tasks mentioned in points 1.2.1 and 1.2.2 above.
AWS API SDK
The following AWS libraries would be required for the Java program:
aws-java-sdk-core\1.11.603\aws-java-sdk-core-1.11.603.jar
aws-java-sdk-s3\1.11.603\aws-java-sdk-s3-1.11.603.jar
File Upload from Desktop and Mobile - Application Changes
Upon a successful file upload from the WOS application, the system keeps the file's upload path pointing to the temp path of WOS storage and marks the file upload status as "P" (pending upload), the tag as "D" (desktop) or "M" (mobile), and the upload time as null.
File upload from WOS application to AWS S3
Upon a successful file upload into WOS temp storage, WOS will invoke an asynchronous Java call.
The code will fetch the list of all pending files (marked "P") from the WOS database, along with their file paths.
The system will fetch the access key ID and secret access key, stored in the WOS database/credential config, used by the WebLogic Application Server to connect to AWS. Similarly, it will get the AWS region and S3 bucket name stored in the WOS database.
The implemented logic will upload the fetched files one by one from the WOS file system to the designated AWS S3 bucket.
For every object, the WOS code will mark the file status in the WOS database as complete ("C") or error ("E", in case of an exception).
Upon a successful upload, the system will update the file's folder path to the actual S3 bucket path.
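A minimal sketch of this upload step with the AWS SDK for Java 1.11.x listed above (the class and method names are illustrative; the credentials, region, and bucket name are assumed to come from the WOS database/credential config as described):

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;
    import java.io.File;

    public class WosS3Uploader {

        private final AmazonS3 s3;
        private final String bucketName;

        public WosS3Uploader(String accessKeyId, String secretAccessKey,
                             String region, String bucketName) {
            BasicAWSCredentials creds = new BasicAWSCredentials(accessKeyId, secretAccessKey);
            this.s3 = AmazonS3ClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(creds))
                    .withRegion(region)
                    .build();
            this.bucketName = bucketName;
        }

        // Uploads one pending file from the WOS temp path and returns the
        // status code to record in the WOS database: "C" complete, "E" error.
        public String uploadPendingFile(String tempPath, String s3Key) {
            try {
                ObjectMetadata metadata = new ObjectMetadata();
                // Server-side encryption, per the security requirement above.
                metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
                s3.putObject(new PutObjectRequest(bucketName, s3Key, new File(tempPath))
                        .withMetadata(metadata));
                return "C";
            } catch (Exception e) {
                return "E";
            }
        }
    }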
Please refer to Appendix 1 for the list of existing WOS application functions impacted by the file upload change.
Retrieve / Delete Files from WOS application & AWS S3
The process is initiated when a user requests to view the uploaded files.
The system will fetch the list of uploaded files from the WOS database, along with each file's upload status, path, and source (WOS desktop or mobile application).
The system will fetch the files based on upload status and path. The list will contain files uploaded both to the cloud and on premises.
The system will connect to AWS S3 using the defined bucket policy, access key ID, secret access key, AWS region, and S3 bucket name.
The system will render the uploaded files to the user. The user may click a selected file to enlarge or download it.
The user will also be allowed to select multiple files and delete them. The system will prompt the user to confirm deletion of the selected files.
Upon confirmation ("Yes/OK"), the system will delete the selected files from on-premises storage or from the designated AWS S3 bucket path, as applicable, and refresh the list of remaining uploaded files shown to the user.
Note: The delete function would be a hard delete; there will be no way for the user to recover a file after confirming deletion.
Please refer to Appendix 2 for the list of existing WOS application functions impacted by the view-image popup change.
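A minimal sketch of the retrieve and delete operations (the S3 client is assumed to be built as in the upload sketch; the method names here are illustrative):

    import com.amazonaws.HttpMethod;
    import com.amazonaws.services.s3.AmazonS3;
    import java.net.URL;
    import java.util.Date;
    import java.util.List;

    public class WosS3FileService {

        private final AmazonS3 s3;
        private final String bucketName;

        public WosS3FileService(AmazonS3 s3, String bucketName) {
            this.s3 = s3;
            this.bucketName = bucketName;
        }

        // Generates a short-lived URL the WOS UI can use to render,
        // enlarge, or download a file stored in S3.
        public URL presignedViewUrl(String s3Key, long validForMillis) {
            Date expiration = new Date(System.currentTimeMillis() + validForMillis);
            return s3.generatePresignedUrl(bucketName, s3Key, expiration, HttpMethod.GET);
        }

        // Hard-deletes the selected files from S3 after user confirmation;
        // there is no recovery after this call.
        public void deleteSelected(List<String> s3Keys) {
            for (String key : s3Keys) {
                s3.deleteObject(bucketName, key);
            }
        }
    }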
Upload existing WOS File Storage to AWS S3
This requirement covers uploading all existing storage accumulated over 7 years, amounting to more than 600 GB of file data. The AWS CLI would be used to complete this task, using multipart upload.
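As an illustration, the one-time bulk copy could look like the following (the local source path is a hypothetical placeholder; the AWS CLI automatically switches to multipart upload for files larger than its multipart threshold):

    # Optional: tune the threshold above which multipart upload kicks in.
    aws configure set default.s3.multipart_threshold 64MB
    # Recursively copy the existing WOS storage to the production bucket,
    # with server-side encryption.
    aws s3 sync /wos/storage/files s3://woss3prod/ --sse AES256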