Data Engineer
Trantor
Posted on: March 17, 2026
Data Engineer – AWS Data Platform
Job Summary
We are looking for a highly skilled and motivated Data Engineer with strong expertise in AWS data services to join our data platform team. The ideal candidate will have hands-on experience designing scalable data pipelines, workflow orchestration frameworks, and large-scale data migration solutions.
In this role, you will build robust cloud-native data engineering solutions on AWS, migrate datasets from legacy systems and data warehouses, and ensure secure and efficient data processing pipelines across distributed environments.
Key Responsibilities
AWS Data Pipeline Development
Design and implement scalable ETL/ELT data pipelines using AWS Glue, AWS Lambda, and AWS S3.
Build and maintain high-performance data ingestion frameworks for processing large-scale datasets.
Implement data pipelines for data warehousing and analytics platforms such as AWS Redshift.
Optimize storage and querying strategies using AWS S3 data lakes.
Data Workflow Orchestration
Develop and maintain data workflow orchestration frameworks using tools such as Apache Airflow or AWS Step Functions.
Automate complex workflows including data ingestion, transformation, validation, and loading processes.
Build reusable and configurable workflows to support multiple data processing use cases.
Data Migration & Integration
Lead data migrations from legacy data warehouse technologies to modern AWS data platforms.
Perform data migration from RDBMS systems (e.g., MySQL, SQL Server, Oracle) to AWS S3 or AWS Redshift.
Design scalable migration frameworks for large datasets with minimal downtime.
Integrate data sources from enterprise applications and external systems.
Data Security & Governance
Implement secure data pipelines using AWS security best practices.
Manage access control and data governance using AWS IAM and Lake Formation.
Ensure data encryption, access management, and compliance across all data platforms.
Performance Optimization & Monitoring
Monitor data pipelines and troubleshoot performance issues.
Optimize ETL workflows for scalability, reliability, and cost efficiency.
Implement logging, monitoring, and alerting mechanisms for data pipelines.
Required Skills & Qualifications (Must Have)
5+ years of experience in Data Engineering or Data Platform development
Strong hands-on experience with core AWS services:
AWS Glue
AWS S3
AWS Lambda
Experience with Data Workflow Orchestration tools such as Apache Airflow or AWS Step Functions
Experience performing data migrations from other data warehouse technologies
Experience performing data migrations from RDBMS systems to AWS S3 or AWS Redshift
Strong expertise in Python and SQL for building scalable data pipelines
Solid understanding of ETL/ELT concepts, data partitioning, and distributed data processing
Experience working with version control systems such as GitLab or Bitbucket
Strong debugging, analytical thinking, and problem-solving skills
Basic understanding of Object-Oriented Programming concepts
Industry Knowledge & Experience
Experience building cloud-native data engineering solutions on AWS
Experience with data warehouse architectures and large-scale analytics platforms
Hands-on experience with data extraction, transformation, and migration frameworks
Experience working in high-volume data environments
About Company
Trantor
Haryana, IN
https://www.trantorinc.com