Some of the exciting projects you will work on include:
- Driving the design, rapid experimentation, development, testing, and deployment of data science models for flow forecasting and anomaly detection;
- Optimising and fine-tuning models in production, overseeing the continuous monitoring of deployed models, and handling model and data drift promptly;
- Building robust pipelines for integrating data from diverse sources, including big geospatial data, ship mobility data, and document recognition;
- Researching and identifying methods, data sources, and features that will drive business impact and improve model accuracy in the ever-changing context of world-scale commodity trading;
As a senior data scientist, you will:
- Devise efficient solutions to tackle ML/Big Data challenges using relevant, up-to-date methods and technologies.
- Work across the stack to deliver new features end-to-end, from prototyping to deployment and monitoring data drift in production.
- Ensure optimal, cost-effective design decisions that improve performance and overcome scalability limits.
- Own meaningful parts of our service, demonstrating the ability to lead projects independently, have an impact, and grow with the company.
- Identify opportunities for novel projects and liaise with product teams to advance ideas into value-adding features.
- Actively share knowledge and document insights, communicating complex concepts and analyses effectively to technical and non-technical audiences to support continuous team improvement and drive collaboration.
- Mentor our junior data scientists, helping to accelerate their growth; you will act as the Tech Lead on some projects.
- Be part of a vibrant Machine Learning community at Kpler, tackling the whole spectrum of ML problems.
Our Machine Learning tech stack includes:
- Python, ML libraries (TensorFlow, PyTorch, scikit-learn, transformers, XGBoost, ResNet), geospatial libraries (shapely, geopandas, rasterio), AWS, Postgres, Apache Airflow, Apache Kafka, Apache Spark.
Mandatory requirements:
- You have at least 5 years of experience in a data science role, deploying models into production.
- You have proven experience delivering end-to-end ML solutions that produce business value.
- You are proficient in Python.
- You have expert knowledge of at least one cloud computing platform (preferably AWS).
- You are fluent in English.
Nice to have, but not mandatory:
- You have expertise in applications focused on geospatial data and mobility analytics (highly desirable).
- You have proven experience with big data technologies, specifically Spark and Kafka.
- You have experience working with state-of-the-art ML pipeline technologies (such as MLflow, SageMaker...) or building an ML pipeline yourself (Docker, Kubernetes, Paperspace, Airflow...).
- You have a Ph.D. in a quantitative field (computer science, mathematics, physics, engineering...).
- You are familiar with the shipping industry and commodity trading.
- You are comfortable with software engineering best practices.
- You value code simplicity, performance and attention to detail.
- You have experience working in an international environment.