d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement. We've achieved this with a first-of-its-kind DIMC engine. Having secured over $154M in funding, including $110M in our Series B, d-Matrix is poised to accelerate generative inference for large language models at scale with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these large language models. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba, Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Location
Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.
AI Software Application Engineer – Technical lead / Principal
d-Matrix is seeking an experienced AI Applications Engineer to drive the successful deployment and support of d-Matrix’s cutting-edge AI products and solutions, specifically in the realm of generative AI inference and AI/ML software support. In this highly technical role, you will work closely with customers and internal teams to resolve complex software, hardware, and firmware challenges related to AI workloads. The ideal candidate will have expertise in AI/ML infrastructure, with a focus on inference solutions and performance optimization for data center environments. This position requires a strong blend of engineering acumen and customer-facing skills to ensure the seamless deployment and continued success of our products.
What You Will Do:
Customer Enablement & Support: Provide expert guidance and support to customers deploying generative AI inference models, including assisting with integration, troubleshooting, and optimizing AI/ML software stacks. Respond promptly to customer queries, perform root cause analysis, and develop timely resolutions for complex issues.
AI/ML Inference Optimization: Work directly with customers to understand their generative AI inference needs and deliver solutions that maximize performance across their AI workloads. Collaborate with customers to implement best practices for model deployment and inference tuning.
System Design & Consultation: Conduct design reviews and provide consultation on AI/ML infrastructure, focusing on optimizing systems for generative AI workloads in datacenters. Develop reference solutions and technical documentation tailored to the needs of AI/ML applications.
AI/ML Software Stack Installation & Validation: Lead the installation, configuration, and bring-up of d-Matrix’s AI software stack. Perform functional and performance validation testing, ensuring that generative AI models run efficiently and meet customer expectations.
Collaboration on Technical Collateral: Partner with internal engineering and product teams to produce developer guides, technical notes, and other supporting materials that facilitate the adoption of our AI/ML solutions by customers.
What You Will Bring:
Engineering degree in Electrical Engineering, Computer Engineering, Computer Science, or a related field, with substantial experience in AI/ML software and infrastructure, including 10+ years in customer engineering and field support for enterprise-level AI and datacenter products, with a focus on AI/ML software and generative AI inference.
In-depth knowledge and hands-on experience with generative AI inference at scale, including the integration and deployment of AI models in production environments.
Experience with automation tools and scripting languages (Linux or Windows shell scripting, Python, Go) to streamline deployment, monitoring, and issue resolution processes.
Ability to communicate complex technical concepts to diverse audiences, from developers to business stakeholders.
Preferred Experience
Hands-on experience with AI/ML infrastructure accelerators (e.g., GPUs, TPUs) and expertise in optimizing performance for generative AI inference workloads.
Strong analytical skills with a proven track record of solving complex problems in AI/ML systems, including performance optimization and troubleshooting in AI/ML frameworks.
Extensive experience deploying AI/ML frameworks such as PyTorch, OpenAI Triton, and vLLM, and familiarity with container orchestration platforms like Kubernetes.
Excellent communication and presentation skills, with a demonstrated ability to guide customers through complex AI/ML system integration and troubleshooting.
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.