Job Description
Location: Toronto, Ontario
Our client is reimagining how AI infrastructure powers real-time, distributed systems, and they're hiring a Lead AI/ML Engineer to architect and drive the intelligence layer of their platform. In this role, you'll own the design and implementation of the AI orchestration layer, inference engine, and ML integration pipelines. You'll lead the development of agent-based intelligence that dynamically generates business logic, performs real-time semantic analysis, and integrates with low-level systems to enable next-generation AI workflows.
This is a high-impact position at the intersection of systems engineering, LLM infrastructure, and applied AI—ideal for someone who’s worked on cutting-edge ML systems and wants to shape how AI actually runs in production.
What You’ll Do:
– Architect and lead development of UniFlow’s AI orchestration and inference layers.
– Design and implement ML integration pipelines that connect AI models with distributed systems and runtime logic.
– Develop agent-based intelligence features that enable automatic business logic generation and real-time semantic analysis.
– Build abstractions and tooling that bridge the gap between AI models and system-level services.
– Collaborate closely with Engineering and Product leadership to drive end-to-end delivery.
– Contribute to defining a Semantic Kernel that underpins advanced orchestration, LLM integration, and knowledge representation.
Why Join?
– Own the AI layer of a high-performance, modular platform built for real-world impact.
– Work on hard, high-leverage problems at the intersection of systems and intelligence.
– Collaborate with deeply technical leaders across compiler infrastructure, distributed systems, and ML.
– Competitive compensation, strong equity, and a chance to shape foundational AI tech from the ground up.
What You Bring:
– 6+ years of experience in applied machine learning or ML infrastructure roles.
– Deep expertise in PyTorch, TensorFlow, and Hugging Face Transformers.
– Strong understanding of LLM architecture, inference optimization, and fine-tuning.
– Experience building ML/AI pipelines in production environments, ideally with real-time or distributed systems.
– Background in LLM infrastructure, semantic kernels, or agent-based AI frameworks.
– Systems thinking—you know how to build ML systems that are efficient, maintainable, and deeply integrated.
Bonus Points For:
– Experience with multi-agent systems, prompt orchestration, or semantic embeddings.
– Familiarity with Kubernetes, cloud-native AI tooling, or on-premise ML deployment.
– Contributions to open-source ML frameworks or LLM research.
Company
GuruLink
Location
Toronto
Country
Canada
Salary
125,000