Strattmont
Data Engineer (Remote)
Data Engineer | Strattmont | Philippines
We have since ventured into building exciting, easy-to-use data visualization solutions on top of Daton. Lastly, we have a world-class data team that understands the story the numbers are telling and articulates it to CXOs, thereby creating value.
Where We Are Today
We are a bootstrapped, profitable & fast-growing (2x y-o-y) startup with old-school value systems. We play in a very exciting space: the intersection of data analytics & ecommerce, both of which are game changers. Today, the global economy faces headwinds forcing companies to downsize, outsource & offshore, creating strong tailwinds for us. We are an employee-first company that values and encourages talent, and we live by those values at every stage of our work without compromising on the value we create for our customers. We strive to make the company a career, not just a job, for the talented folks who have chosen to work with us.
The Role
We are seeking a highly skilled and motivated Data Engineer with expertise in Python or PySpark (experience with DBT, the Data Build Tool, is a plus) to join our team. As a Data Engineer, you will play a crucial role in designing, building, and maintaining our data infrastructure in the cloud. You will collaborate with cross-functional teams to ensure data is collected, processed, and made available for analysis and reporting. If you are passionate about data engineering and cloud technologies and have strong programming skills, we invite you to apply for this exciting position.
Responsibilities
- Data Pipeline Development: Design, develop, and maintain scalable data pipelines that collect, process, and transform data from various sources into usable formats for analysis and reporting.
- Cloud Integration: Leverage cloud platforms such as AWS, Azure, or Google Cloud to build and optimize data solutions, ensuring efficient data storage, access, and security.
- Python/PySpark Expertise: Utilize Python and/or PySpark for data transformation, manipulation, and ETL processes; write clean, efficient, and maintainable code (a brief illustrative sketch follows this list).
- Data Modeling: Create and maintain data models that align with business requirements, ensuring data accuracy, consistency, and reliability.
- Data Quality: Implement data quality checks and validation processes to ensure the integrity of the data, troubleshooting and resolving issues as they arise.
- Performance Optimization: Identify and implement performance optimizations in data pipelines and queries to ensure fast and efficient data processing.
- Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with reliable data sets.
- Documentation: Maintain thorough documentation of data pipelines, workflows, and processes to ensure knowledge sharing and team efficiency.
- Security and Compliance: Implement security best practices and ensure data compliance with relevant regulations and company policies.
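For a concrete feel for the pipeline work described above, here is a minimal, hypothetical PySpark sketch of an extract-transform-load job. The table names, columns, and storage paths are illustrative assumptions, not a description of Strattmont's actual stack.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical ETL sketch: all names and paths below are placeholders.
spark = SparkSession.builder.appName("daily_revenue_etl").getOrCreate()

# Extract: read raw order events from cloud storage (assumed S3 path).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: deduplicate orders, derive revenue, aggregate per day.
daily_revenue = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .groupBy(F.to_date("ordered_at").alias("order_date"))
    .agg(
        F.sum("revenue").alias("total_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Load: write a partitioned Parquet mart for analysts and BI tools.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
```

A job like this would typically also carry data-quality checks (e.g., null and uniqueness assertions) before the load step, in line with the Data Quality responsibility above.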
Good To Have (Preferred Skills)
- DBT (Data Build Tool): Experience with DBT for managing and orchestrating data transformations.
- Containerization and Orchestration: Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Data Streaming: Knowledge of data streaming technologies (e.g., Kafka, Apache Spark Streaming).
- Workflow Management: Familiarity with data orchestration and workflow management tools (e.g., Apache Airflow); a short example DAG follows this list.
- Cloud Certification: Certification in cloud services (e.g., AWS Certified Data Analytics, Azure Data Engineer).
- Data Governance: Understanding of data governance and data cataloging.
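To illustrate the orchestration side mentioned above, here is a minimal, hypothetical Airflow DAG (assuming Airflow 2.x); the DAG id, schedule, and shell commands are placeholders rather than a description of our actual workflows.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical DAG: ids, schedule, and commands are illustrative only.
with DAG(
    dag_id="daily_revenue_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="python extract.py")
    transform = BashOperator(task_id="transform", bash_command="dbt run")

    # Run the extract before the DBT transformation.
    extract >> transform
```

This also shows where DBT fits: orchestrators like Airflow commonly trigger `dbt run` as one step in a larger pipeline.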
Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field (Master’s degree preferred).
- Proven experience as a Data Engineer, with a focus on cloud-based solutions.
- Strong proficiency in Python and/or PySpark for data processing and ETL tasks.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data warehousing concepts and technologies (e.g., Redshift, BigQuery).
- Familiarity with data modeling and database design principles.
- Solid understanding of data integration and ETL best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively in cross-functional teams.
Join our team and be part of a dynamic environment where you can contribute to the development of innovative data solutions using the latest cloud technologies, programming languages, and DBT for efficient data transformations. If you are a passionate data engineer with these skills, we want to hear from you!