Spiden

Senior DevOps Cloud Engineer (100%)

Pfäffikon, Switzerland
Skills: C++, Go, JavaScript, Java, Puppet, AWS, Azure, Python, Kafka, PowerShell, Ansible, Chef, Microservices, Git, Docker, Hadoop, Spark, R, GCP, Kubernetes, Bash, Terraform, Machine Learning
Description
What could you build on top of real-time and non-invasive glucose monitoring? That is what Spiden is all about, and we want to beat Apple on it:
https://www.handelsblatt.com/unternehmen/industrie/start-up-check-schweizer-start-up-spiden-will-apple-bei-blutzucker-technik-ueberholen/29293274.html 
 
Machine learning, personalized health, and predictive medicine. If these areas resonate with you, join us to work on foundational technological and scientific challenges at Spiden. We are a Swiss MedTech venture with the vision to use state-of-the-art detection techniques to continuously monitor and learn from a wide range of vital indicators, to better manage chronic diseases, to customize critical treatments, and to improve your health.
 
Using proprietary optical sensors, Spiden is building a cutting-edge biomedical data generation pipeline to train medical-grade machine learning algorithms to infer glucose transcutaneously from spectral measurements. To achieve our vision, our team and advisory board consist of world experts from top academic institutions (ETH, EPFL, Columbia, Princeton, and Harvard, among others) and industry leaders (Baxter, Roche, Lonza). We are looking for a talented, experienced, voraciously curious, and self-driven professional, and we offer great opportunities for growth.

Responsibilities

As a DevOps Cloud Engineer, you'll guide, design, and implement data and ML pipelines on the Google Cloud Platform. You will work with multi-modal data alongside internal teams at Spiden, our partners (such as universities, clinical partners, and research centers), and equipment manufacturers to design large-scale data processing systems. In this role, you will be part of the Machine Learning Research & Engineering team, communicating frequently and coordinating closely with all R&D teams, including Biomedical Science, Biochemistry, Biophotonics, and Electrical Engineering, in our office and labs in Pfäffikon (Schwyz).

  • Improve and maintain our cloud infrastructure with solutions that meet the organization's scalability, availability, performance, and security requirements.
  • Evaluate and select appropriate cloud services and technologies to build a robust and scalable cloud infrastructure.
  • Develop cloud infrastructure blueprints, including network architecture and security controls. Implement and configure virtual networks, subnets, and routing tables to establish secure and isolated environments within the cloud infrastructure.
  • Set up and configure load balancers, auto-scaling groups, and other scalability mechanisms to ensure optimal performance and resource utilization.
  • Implement security measures, such as network security groups, firewalls, and access controls, to protect the cloud infrastructure from unauthorized access and potential threats (a short illustrative sketch of this kind of check follows this list).
  • Configure and manage identity and access management (IAM) policies and roles to control user access and permissions within the cloud environment.
  • Implement backup and disaster recovery strategies to ensure data integrity and availability in case of system failures or data loss.
  • Collaborate with cross-functional teams, such as developers and scientists, to ensure alignment and adherence to cloud infrastructure best practices.
  • Stay updated with the latest trends, technologies, and best practices in cloud infrastructure design and security to continuously improve the organization's cloud capabilities.
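
To give a flavour of this kind of work, here is a minimal Python sketch that audits GCP firewall rules for overly permissive ingress. It is purely illustrative: the project ID is a placeholder, it assumes an authenticated gcloud CLI, and it does not represent Spiden's actual tooling or policies.

# Illustrative sketch only: flag GCP firewall rules that allow ingress from
# anywhere (0.0.0.0/0), one example of the security controls described above.
# Assumes the gcloud CLI is installed and authenticated; PROJECT_ID is a
# hypothetical placeholder.
import json
import subprocess

PROJECT_ID = "example-project"


def list_firewall_rules(project_id: str) -> list[dict]:
    """Return all firewall rules in the project as parsed JSON."""
    result = subprocess.run(
        ["gcloud", "compute", "firewall-rules", "list",
         "--project", project_id, "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)


def find_open_ingress(rules: list[dict]) -> list[str]:
    """Return names of enabled ingress rules open to the whole internet."""
    return [
        rule["name"]
        for rule in rules
        if rule.get("direction") == "INGRESS"
        and not rule.get("disabled", False)
        and "0.0.0.0/0" in rule.get("sourceRanges", [])
    ]


if __name__ == "__main__":
    for name in find_open_ingress(list_firewall_rules(PROJECT_ID)):
        print(f"WARNING: firewall rule '{name}' allows ingress from 0.0.0.0/0")

In practice, a check like this would typically run as part of a CI/CD pipeline or a scheduled job rather than by hand.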
Minimum qualifications
  • MSc in Computer Science, Mathematics, Physics, Engineering, or a related technical field, or equivalent practical experience.
  • 6 years of experience writing software in at least one compiled and one interpreted language, preferably Python; alternatively C, C++, Go, JavaScript, or Java.
  • A minimum of 5 years of hands-on experience in developing solutions using public cloud providers.
  • DevOps experience: Familiarity with DevOps practices and tools, including continuous integration and continuous delivery (CI/CD) pipelines, version control systems (e.g., Git), and configuration management tools (e.g., Ansible, Chef, Puppet). Experience implementing DevOps principles to enhance development and deployment workflows.
  • Cloud security expertise: Deep understanding of cloud security principles, best practices, and compliance frameworks. Experience implementing security controls, conducting security audits, and managing identity and access management (IAM) in cloud environments.
  • Excellent verbal and written communication skills.

Preferred qualifications

  • Advanced cloud certifications: Possession of relevant certifications that demonstrate a high level of proficiency in cloud technologies. Examples include AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect, or Google Cloud Certified - Professional Cloud Architect.
  • Containerization and orchestration: Experience with containerization technologies such as Docker and container orchestration platforms like Kubernetes. Understanding of microservices architecture and its implementation in cloud environments.
  • Strong scripting and automation skills: Proficiency in scripting languages like Python, PowerShell, or Bash, and experience with infrastructure-as-code tools such as Terraform or CloudFormation. Ability to automate infrastructure deployment, configuration management, and other repetitive tasks (see the brief illustrative sketch after this list).
  • Big data and analytics knowledge: Experience working with big data technologies such as Hadoop, Spark, or Apache Kafka. Familiarity with data warehousing, data lakes, and analytics tools, enabling the design and implementation of data-driven solutions in the cloud.
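
As an illustration of the automation mentioned above, the following Python sketch expands per-environment settings into concrete, labelled instance definitions. It is a simplified stand-in for what would normally be expressed in an infrastructure-as-code tool such as Terraform; all names, machine types, zones, and environments are hypothetical.

# Illustrative sketch only: render environment-specific instance definitions
# from one set of parameters, a simplified stand-in for IaC templating.
# Every name, machine type, zone, and label below is hypothetical.
import json
from dataclasses import dataclass, asdict


@dataclass
class InstanceSpec:
    name: str
    machine_type: str
    zone: str
    labels: dict


ENVIRONMENTS = {
    "dev":  {"machine_type": "e2-small",      "replicas": 1},
    "prod": {"machine_type": "n2-standard-4", "replicas": 3},
}


def render_specs(env: str) -> list[InstanceSpec]:
    """Expand one environment entry into concrete, labelled instance specs."""
    cfg = ENVIRONMENTS[env]
    return [
        InstanceSpec(
            name=f"ml-pipeline-{env}-{i}",
            machine_type=cfg["machine_type"],
            zone="europe-west6-a",  # hypothetical zone
            labels={"env": env, "team": "ml-engineering"},
        )
        for i in range(cfg["replicas"])
    ]


if __name__ == "__main__":
    for env in ENVIRONMENTS:
        print(json.dumps([asdict(spec) for spec in render_specs(env)], indent=2))

Keeping this kind of expansion in code (or, more typically, in Terraform modules) avoids hand-editing near-identical configuration for each environment.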

Desirable:
  • Experience leading cross-disciplinary data projects with fast iterations and changing experimental setups (such as academic, laboratory, or R&D environments).
  • Experience in the healthcare or pharma industry, working with medical devices or in a biomedical laboratory environment, is a plus.

IMPORTANT: we cannot sponsor work permits for non-EU / EFTA nationals for this role. Applications that do not fulfill this criterion will be automatically rejected.

Work/Life Balance
Working at a growing MedTech start-up is demanding and our goals are ambitious, which is why our team puts a strong emphasis on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. Therefore, we offer flexibility in working hours and encourage you to find your own balance between your work and personal life.

Values and Mission are important at Spiden: the ultimate goal is to improve people's well-being, and we aspire to live up to that. We will have the chance to discuss our values and mission during the interview process.

Diversity
With 18 nationalities in the company, we strive to build a diverse and exciting environment. Within the SMLE team, we pay special attention to gender diversity (currently balanced at 50/50 among permanent employees) and to equal opportunities.

Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. We want you to grow with Spiden.

Amazing team!
In this role, you will be part of the Machine Learning Research & Engineering team, currently composed of 13 awesome colleagues from whom you will have the opportunity to learn a lot professionally, and with whom you can enjoy great conversations and fresh, well-informed points of view. Additionally, you will work closely with all R&D teams, including Biomedical Science, Biochemistry, Biophotonics, and Electrical Engineering.

Don't forget to check the team page on our website!
The interview process consists of 3 stages:
  1. Introduction call (20 min - led by hiring manager - remote with video)
  2. Technical Interview (60-90 min - led by engineers from the team - role specific - remote with video)
  3. On-site interview (90-120 min - culture & team fit, lab tour and, if the role requires it, meeting members from other teams - in person in our office and lab in Pfäffikon)
