Lunit

(Seoul) Cloud Architect · AI Platform

Seoul, South Korea
Go Streaming Kafka Bash Java GCP Python Deep Learning Spark Shell API Machine Learning Terraform AWS Azure
Description

"Conquering cancer through AI"

Lunit, a portmanteau of ‘Learning unit,’ is a medical AI software company devoted to providing AI-powered total cancer care.
Our AI solutions help discover cancer and predict cancer treatment outcomes, achieving timely and individually tailored cancer treatment.

🗨️ About the Team

  • The AI Platform department is responsible for supporting various data and infrastructure-related aspects of the AI development process, from data acquisition to curation to enrichment to infrastructure.
  • We support both the cancer screening/radiology and oncology sides of the company, working towards a unified multi-modal system.
  • The AI Platform (AIP) team is an international team of enthusiastic researchers and engineers with diverse interests. We foster a friendly and open environment, encouraging idea and knowledge sharing and collaboration in software engineering. Products and technologies we are developing for the future include data management and governance, smart data annotation, federated learning, model customization, and foundation model development.

🗨️ About the Position

  • Build the Future of Medical Technology
    As our Cloud Architect, you’ll be at the core of Lunit’s long-term plan to build the next generation of machine learning models. Your expertise in technology, product, and the cloud-related software development lifecycle will be essential in delivering Lunit’s new AI and data platform. It will unite data across medical domains globally to produce custom-tailored AI models that save lives in ways never done before. Your state-of-the-art architecture will push the cutting edge of medicine toward personalized healthcare based on patients' holistic medical profiles.
  • Connect Specialists to Deliver the Impossible
    To deliver our next-generation AI initiatives, your ability to resolve complex technical challenges, synthesize diverse technical landscapes, and coordinate multiple disciplines will be crucial in realizing our vision for a dynamic, scalable cloud infrastructure and architecture. This role demands proven expertise in globally distributed SaaS/PaaS systems to create a robust, high-volume, and high-quality cloud platform. Through a cutting-edge, federated data cloud architecture, you will spearhead the development of our advanced foundation models. You’ll be the intersection among various specialist teams, including data engineers, infrastructure experts, cybersecurity and data privacy experts, product teams, data scientists, software engineers, and business development teams, to design a comprehensive system architecture for our mission.
  • Save Lives
    You’re humble but ambitious, knowing ego can’t get in the way of your goals. You face risks head on, knowing they must be overcome to cure the next patient’s cancer. Your design will strengthen our AI models towards visionary goals. You are constantly learning, trying to find that next piece of information to solve the next problem. In the end, at Lunit, you’ll be a major part of conquering cancer through AI.

🚩 Roles & Responsibilities

  • Architecture Design: Within the AI Platform team, work closely with the MLOps, TPM, and Backend teams to support and optimize current data pipeline development, then design and build a new, highly scalable, fault-tolerant, globally distributed Data/AI platform that can handle massive amounts of data and AI inference across multiple regions
  • Tech Advisory: Assist VP- and C-level leadership with strategic cloud technology decisions, and advise on engineering best practices and the tech stack for Lunit’s cloud services
  • Architecture Review Committee: Establish the committee, along with architecture review processes and standards, to review the architectures of various SaaS/AIaaS products. Lead the committee to identify and study emerging technologies relevant to cloud, data collection, data pipelines, and XOps (DevSecOps, DataOps, MLOps), to enhance Lunit’s cloud services and evangelize them
  • Performance Optimization: Optimize the performance of data collection and ML inference services, ensuring low latency and high throughput for data processing and inference
  • Project Collaboration: Support PMs with technical design and solutions for large-scale projects, coordinate teams across different regions on technically relevant tasks, and manage stakeholder expectations related to technology
  • Coaching: Provide training and coaching to strengthen the technical competency of Lunit’s engineers

🎯 Minimum Qualifications

  • 10 to 15 years of overall work experience
    • 7-9 years of engineering experience in cloud computing, DevSecOps, and DataOps
    • 3+ years in an architect role designing globally distributed cloud architectures
  • Proficiency in business English
  • Overall understanding of the healthcare industry and passion for artificial intelligence
  • Master’s degree in computer science or a related STEM field, or equivalent experience
  • DevOps/DevSecOps
    • CI/CD: Familiarity with CI/CD, experience applying security policies and practices to establish DevSecOps across the SDLC, and automated deployment strategies for managing a global platform
    • IaC: Expertise in infrastructure as code (IaC) with Terraform or similar
    • Monitoring and Analysis: Expertise in setting up robust monitoring and logging systems to track and analyze service health and performance, identify potential issues, and troubleshoot problems
  • Cloud Platforms: Deep knowledge of at least one major cloud platform (AWS, Azure, GCP, etc.; two or more is a plus) for designing, deploying, and managing infrastructure, including compute, storage, networking, security, and containerization services.
  • Scalability and High Availability: Expertise in designing and implementing architectures that can handle massive data volumes and acquisition, and ensure high availability (minimal downtime) for data services (involving autoscaling, clustering, redundancy, load balancing, and fault tolerance)
  • Data Storage and Management: Solid experience in large-scale data collection and processing with various data storage solutions (e.g., object storage, relational databases, NoSQL databases, data lakes, data marts, data warehouses, data lineage)
  • Data Pipelines and Streaming: Familiarity with building DataOps data pipelines to ingest, process, de-identify, and transform data from various sources in real time or batch (a minimal illustrative sketch of a de-identification step follows this list). Experience with at least one of Airflow, MLflow, or Prefect, and with data streaming technologies such as Apache Kafka, Apache Spark, or cloud-native data processing services.
  • Security and Compliance: Deep understanding of cloud security best practices to protect sensitive data, de-identification and encryption techniques, access control, and vulnerability management. Knowledge of relevant compliance regulations for data protection and security best practices for handling sensitive data across different jurisdictions (e.g., GDPR, HIPAA, ISO 27001)
  • Distributed Systems: Knowledge of distributed systems principles for building reliable and scalable data collection and model inference services, with a clear grasp of concepts such as distributed consensus, replication, and fault tolerance
  • Excellent organizational and communication skills
  • Experience in working on large-scale web-related or data system technologies
  • Strong problem-solving and critical-thinking skills
  • Familiar with the software development lifecycle
  • Able to identify unforeseen technical risks, then discuss and define practical solutions to resolve them
  • Able to calmly and effectively resolve conflicts or confrontations among teams and customers
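
For illustration only, not an additional requirement: below is a minimal sketch of the kind of de-identification step referenced in the data pipeline and security bullets above, written in Python using only the standard library. The field names (patient_id, name, phone, address), the PSEUDONYM_KEY environment variable, and the record layout are hypothetical; a real pipeline would follow the applicable GDPR/HIPAA rules and the organization's own de-identification standards.

    import hashlib
    import hmac
    import os

    # Hypothetical key used to pseudonymize identifiers consistently across batches.
    PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "example-only-key").encode()

    # Illustrative list of direct identifiers that are dropped outright.
    DIRECT_IDENTIFIERS = {"name", "phone", "address"}

    def pseudonymize(value: str) -> str:
        """Deterministically map an identifier to a stable pseudonym."""
        return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    def deidentify(record: dict) -> dict:
        """Drop direct identifiers and replace the patient ID with a pseudonym."""
        cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        if "patient_id" in cleaned:
            cleaned["patient_id"] = pseudonymize(str(cleaned["patient_id"]))
        return cleaned

    if __name__ == "__main__":
        raw = {"patient_id": "P-10023", "name": "Jane Doe", "age": 54, "study": "CXR"}
        print(deidentify(raw))  # direct identifiers removed, patient_id pseudonymized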

🎯 Preferred Qualifications

  • OS and Shell Scripts: In-depth knowledge of Linux and shell scripting (at least one of Bash, Ksh, etc.), as well as Linux CLI tools for system and network monitoring and troubleshooting
  • Programming: Proficiency in languages like Python, Java, or Go for developing custom solutions and automation scripts
  • Understanding Data Collection Needs: Ability to analyze data collection requirements, identify business objectives, and translate them into technical design and solutions
  • Cost Optimization: Expertise in designing cost-effective cloud architectures that optimize resource utilization and minimize unnecessary expenses
  • API Design: Understanding of API design principles to create well-defined and easy-to-use APIs for your data collection service.
  • Network architecture: Strong understanding of global networking concepts, content delivery networks (CDNs), and optimizing data transfer across regions.
  • Deep knowledge of data mesh, data consolidation, and data federation
  • Data governance and lifecycle management: Understanding of data governance principles and implementing effective data lifecycle management strategies
  • Project management: Skills in managing large-scale tech-initiative projects, coordinating engineering teams across different regions, and handling stakeholders' tech expectations
  • Personable but confident when identifying and discussing problems
  • Strong writing and documentation skills
  • Strong presentation skills
  • Collaboration: Ability to collaborate effectively with PM, developers, data scientists, security professionals, and other stakeholders to ensure successful project execution.
  • Communication: Strong communication skills to clearly articulate technical concepts to technical and non-technical audiences.
  • Continuous Learning: Proven commitment to continuous learning and staying up to date with the latest advancements in cloud, coding, data, and AI.

🏅 Preferred Experiences

  • Korean language skills
  • Experience in IT and medical industries
  • Experience in working on large web-related and data system technologies

📝 How to Apply

  • CV (resume, free format, in English)

🏃‍♀️ Hiring Process

  • Document Screening → Introductory Interview (Online) → Design Assignment → Competency-Based Interview → Culture-Fit Interview → Onboarding
    • All interviews are conducted in English.
    • After the final interview, we may proceed with reference checks if needed.
    • During the competency-based interview, the candidate will give a 20-minute presentation on one or two past projects. Afterward, the interviewers will ask questions about the past projects and the assignment.

🤝 Work Conditions and Environment

  • Work type: Full Time
  • Work location: Lunit HQ (5F, 374, Gangnam-daero, Gangnam-gu, Seoul)
  • Salary: Negotiable

🎸 ETC

  • If you misrepresent your experience or education, or provide false or fraudulent information in or with your application, it may be grounds for cancellation of employment.
  • Lunit is committed to providing preferential treatment to those eligible for employment protection (persons of national merit and people with disabilities) in accordance with relevant laws and regulations.