Obrimo Technologies (formerly Salecino)
AWS Data Engineer (Remote) | India
Job Title: AWS Data Engineer
Experience Required: 6+ Years
Work Hours: 12 PM to 9 PM IST
Job Description:
We are seeking an AWS Data Engineer with 6+ years of hands-on experience to join our dynamic team. The ideal candidate will have strong expertise in building and maintaining data pipelines, with in-depth knowledge of AWS services, data orchestration, and automation. You will work on complex ETL processes, manage data workflows, and ensure efficient data integration across the organization.
Key Responsibilities:
- Develop, test, and deploy high-quality Python and PySpark code to build scalable data pipelines (a representative sketch follows this list).
- Design and implement complex SQL queries to manage and manipulate large data sets across multiple databases.
- Utilize AWS services such as AWS Glue, AWS Lambda, Amazon Redshift, AWS Step Functions, Kinesis, SNS, SQS, and S3 to build and optimize data pipelines (the second sketch after this list shows how Lambda, SQS, and S3 fit together).
- Oversee the orchestration of data pipelines, ensuring seamless automation of workflows, whether on-premises or in the AWS cloud.
- Implement and manage CI/CD pipelines for smooth code integration, testing, and deployment across data platforms.
- Apply hands-on experience with ETL tools and frameworks to ensure robust data extraction, transformation, and loading processes.
- Collaborate with cross-functional teams to design and maintain data warehouses and ensure effective data management across multiple databases.
- Troubleshoot and resolve any issues related to data pipelines, workflows, and AWS infrastructure.
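The first two responsibilities above are the day-to-day core of the role. As a purely illustrative sketch (the app name, S3 paths, and event schema are hypothetical, not taken from this posting), a minimal PySpark job combining a Spark SQL transformation with an S3-based pipeline might look like:

# Illustrative sketch only: bucket, paths, and schema are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Read raw JSON events previously landed in S3 (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/events/")
raw.createOrReplaceTempView("events")

# Aggregate with Spark SQL, the kind of query the role calls for.
daily = spark.sql("""
    SELECT user_id,
           CAST(event_time AS DATE) AS event_date,
           COUNT(*)                 AS event_count
    FROM events
    GROUP BY user_id, CAST(event_time AS DATE)
""")

# Write partitioned Parquet for downstream Redshift/Athena-style consumption.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_events/"))

spark.stop()

A job of this shape would typically run as an AWS Glue job or EMR step and be scheduled by Step Functions or another orchestrator, in line with the orchestration responsibility above.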
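For the AWS services bullet, a second illustrative sketch (the bucket name and message format are assumptions, not from the posting) shows how Lambda, SQS, and S3 commonly fit together in an event-driven ingestion path:

# Illustrative sketch only: bucket name and message format are assumptions.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Lambda receives a batch of SQS records when triggered by an SQS event source.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        key = f"raw/events/{record['messageId']}.json"
        # Land each message in S3 for later batch processing by Glue/PySpark.
        s3.put_object(
            Bucket="example-bucket",
            Key=key,
            Body=json.dumps(payload).encode("utf-8"),
        )
    return {"processed": len(event["Records"])}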
Skills and Qualifications:
- 6+ years of experience as a Data Engineer or similar role, with significant expertise in AWS cloud services.
- Strong programming skills in Python and PySpark.
- Proficiency in writing and optimizing SQL queries for complex data manipulations.
- In-depth experience with AWS Glue, Lambda, Redshift, Step Functions, Kinesis, SNS, SQS, and S3.
- Demonstrated experience orchestrating data pipelines and automating workflows in AWS or on-premises environments.
- Knowledge of CI/CD pipelines and best practices for continuous integration and deployment.
- Hands-on experience working with ETL processes, tools, and frameworks.
- Experience in working with data warehouses and managing multiple databases.
Preferred Attributes:
- Ability to work in a collaborative environment with strong communication and problem-solving skills.
- Proven ability to handle complex projects and data architecture challenges.
- Proactive in identifying and implementing process improvements for data pipeline efficiency.