BioSpace
Development Operations Engineer (Remote)
Development Operations Engineer | BioSpace | United States
Summary
The DevOps Engineer will be responsible for building a groundbreaking Lab Automation software platform. The role's responsibilities extend beyond automated cloud infrastructure to include the CI/CD pipeline, the release cycle, and the local developer environment. The engineer will manage a multi-tenant SaaS deployment while adhering to pharmaceutical-industry security policies and keeping the infrastructure cost-efficient, and must be able to make key technical decisions that drive the design, build, and integration of brand-new infrastructure.
Responsibilities
- Design and architect cloud infrastructure: Develop and implement comprehensive cloud infrastructure solutions on AWS ensuring high availability, scalability, and security.
- Automate infrastructure provisioning: Utilize tools like Terraform, Ansible, or CloudFormation to automate the creation, configuration, and management of cloud resources, reducing manual effort and increasing efficiency.
- Implement CI/CD pipelines: Build and maintain continuous integration and continuous delivery pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI to automate the software delivery process, enabling faster and more reliable releases.
- Collaborate with development and operations teams: Work closely with software engineers, QA engineers, and operations staff to ensure smooth integration and deployment of applications, providing technical guidance and support.
- Ensure security and compliance: Implement robust security measures to protect cloud infrastructure and data, complying with industry standards and regulations.
- Manage configuration management: Employ tools like Ansible, Puppet, or Chef to maintain consistency and compliance across various environments, ensuring that applications and services are deployed and configured correctly.
- Monitor and optimize system performance: Use monitoring tools like Prometheus, Grafana, or ELK Stack to track system performance, identify bottlenecks, and implement optimizations to improve response times and resource utilization.
Qualifications: Successful candidates will be able to meet the qualifications below with or without a reasonable accommodation.
Education Qualifications (from An Accredited College Or University)
- Bachelor’s Degree required: Computer Science, an IT-related field, Physics, Chemistry, Electrical Engineering, or another science or engineering field
- Master’s Degree preferred: Computer Science, an IT-related field, Physics, Chemistry, Electrical Engineering, or another science or engineering field
Experience Qualifications
- 7 or more years with AWS Cloud (VPC, EKS, EC2, Identity Center, RDS) and a scripting language (e.g., Python, Perl) required
- 4 or more years with Kubernetes required
- 4 or more years with CI/CD systems (e.g., Jenkins, ArgoCD, GitHub Actions) required
- 4 or more years with Terraform required
Travel
Ability to travel up to 5%. Attend team and department meetings on a quarterly basis or as needed.
Daiichi Sankyo, Inc. is an equal opportunity/affirmative action employer. Qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.