Aditya S.

Data Engineer

$140

My experience


Credit Saison India, February 2021 - Present

Department: Engineering

Project: Vaultron (LMS - Automated Data Pipeline)

TECHNICAL SKILLS:

Operating System: macOS, Linux/Unix

Languages / Frameworks: Python

Traditional Data: ETL, SQL, Data Modeling, Data Warehousing, Data Integration

Data Formats: CSV, Parquet, structured

Orchestration: AWS Step Functions (an illustrative sketch follows this list)

AWS Services: Amazon EC2 (Elastic Compute Cloud), AWS Glue (Serverless ETL Service), Amazon Redshift (Cloud Data Warehouse), Amazon RDS (Relational Database Service), Amazon S3, Amazon CloudWatch, Amazon Athena, AWS Lambda, Amazon SQS, Amazon SNS, Amazon Aurora, AWS CloudFormation, Amazon ECS

Databases: PostgreSQL

Tools/Other: PyCharm, Git, Bitbucket, JIRA, Confluence, Jenkins, Docker.
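To illustrate this stack rather than the project's actual code, here is a minimal Python sketch of how a Glue ETL job and a Step Functions pipeline can be triggered with boto3; the job name, state machine ARN, S3 paths and run-date payload are hypothetical placeholders, not values from Vaultron.

import json

import boto3

# Clients for the two services listed above (credentials/region come from the environment).
glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")

# Start a serverless Glue ETL job run; the job name and S3 paths are placeholders.
run = glue.start_job_run(
    JobName="vaultron-daily-etl",
    Arguments={
        "--source_path": "s3://example-bucket/raw/",
        "--target_path": "s3://example-bucket/curated/",
    },
)
print("Glue job run id:", run["JobRunId"])

# Trigger a Step Functions state machine that orchestrates the wider pipeline;
# the ARN and input payload are placeholders.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:vaultron-pipeline",
    input=json.dumps({"run_date": "2021-06-01"}),
)
print("Execution ARN:", execution["executionArn"])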


Sun Life Financial, October 2018 - February 2021

Team: Data and Business Intelligence Services (DBIS)

Project: Sunrise (IFRS 17 ETL Development - Actuarial Stream)

TECHNICAL SKILLS:

Operating System: Linux/Unix, Windows

Languages / Frameworks: Python, Java

Traditional Data: ETL, SQL, Data Modeling, Data Warehousing, Data Integration

Big Data: Hadoop, HDFS, Spark, Ambari, PySpark (a short PySpark sketch follows this list)

Data Formats: Avro, CSV, Parquet, ORC, structured

Orchestration: Apache Airflow, AWS Step Functions

AWS Services: Amazon EC2 (Elastic Compute Cloud), AWS Glue (Serverless ETL Service), Amazon Redshift (Cloud Data Warehouse), Amazon RDS (Relational Database Service), Amazon S3, Amazon CloudWatch, Amazon Athena, AWS Lambda, Amazon SQS, Amazon SNS.

Databases: PostgreSQL, SQL Server

Tools/Other: PyCharm, Git, Bitbucket, JIRA, Confluence, Jenkins, Docker.
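As an illustration of the big data and file-format skills above (not code from Sunrise), a minimal PySpark sketch that reads raw CSV and writes partitioned Parquet; the paths and column names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw CSV extracts; the HDFS location is a placeholder.
raw = spark.read.option("header", True).csv("hdfs:///data/raw/policies/")

# Type the date column and derive a partition column; column names are placeholders.
curated = (
    raw.withColumn("reporting_date", F.to_date("reporting_date", "yyyy-MM-dd"))
       .withColumn("reporting_year", F.year("reporting_date"))
)

# Write columnar output; .orc(...) works the same way for ORC.
curated.write.mode("overwrite").partitionBy("reporting_year").parquet(
    "hdfs:///data/curated/policies/"
)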

Gemalto, July 2018 - October 2018

Gemalto, January 2018 - June 2018

My stack

Languages

Python

Technologies

Amazon Web Services (AWS)

Business Intelligence

Data Warehouse

Big Data

Big Data Architecture, Spark, Hive, PySpark

My education and training

B.Tech (Computer Science and Engineering) - Lovely Professional University, 2014 - 2018