About the Role
We’re looking for a highly skilled iOS Mobile Engineer with expertise in integrating Vision and Speech models and designing heuristic-based decision systems. You’ll help build intelligent, context-aware iOS experiences powered by cutting-edge on-device and cloud-based machine learning models.
This role bridges machine learning and product engineering, ensuring that AI features are not just technically sound but also deliver seamless, intuitive user experiences on iOS devices.
Key Responsibilities
- Design, build, and maintain iOS applications that integrate computer vision and speech recognition models.
 - Deploy and optimize models with the Core ML, Vision, and Speech frameworks for real-time, low-latency inference.
 - Implement heuristic systems that enhance model outputs, such as post-processing of predictions, confidence-based decisions, or hybrid rule/model logic (see the Swift sketch after this list).
 - Collaborate with AI research and backend teams to integrate multimodal ML models (vision, speech, and text).
 - Work closely with UX designers to translate model capabilities into delightful and efficient user experiences.
 - Conduct performance profiling and optimization for on-device inference, memory usage, and energy efficiency.
 - Contribute to CI/CD pipelines, testing frameworks, and automated evaluation of ML-integrated features.
 - Stay current with the latest Apple ML technologies (e.g., Core ML Tools, Create ML, Neural Engine optimizations).
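
As a concrete illustration of the hybrid rule/model logic mentioned above, here is a minimal Swift sketch of a confidence-based decision layer over Vision classification results. The `LabelDecision` type, function name, and thresholds are illustrative assumptions for this posting, not part of any specific product.

```swift
import Vision

/// Illustrative outcome type; the cases and names are assumptions for this sketch.
enum LabelDecision {
    case accepted(String)          // model output trusted as-is
    case needsConfirmation(String) // low confidence: defer to a rule or ask the user
    case rejected                  // below the floor: discard
}

/// A minimal heuristic layer over Vision classification output.
/// `observations` would typically come from a `VNCoreMLRequest` completion handler.
func decide(from observations: [VNClassificationObservation],
            acceptThreshold: VNConfidence = 0.85,
            reviewThreshold: VNConfidence = 0.5) -> LabelDecision {
    // Pick the highest-confidence label the model produced.
    guard let top = observations.max(by: { $0.confidence < $1.confidence }) else {
        return .rejected
    }
    if top.confidence >= acceptThreshold {
        return .accepted(top.identifier)
    } else if top.confidence >= reviewThreshold {
        return .needsConfirmation(top.identifier)
    } else {
        return .rejected
    }
}
```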
 
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field.
 - 7–9 years of professional iOS development experience using Swift and SwiftUI/UIKit.
 - Proven experience with Core ML, Vision, and Speech frameworks.
 - Familiarity with ML model conversion and optimization for iOS (e.g., PyTorch/TensorFlow → Core ML).
 - Strong grasp of heuristics and rule-based systems, including hybrid model-rule pipelines.
 - Solid understanding of mobile performance optimization, asynchronous programming, and memory management.
 - Experience integrating APIs or SDKs for automatic speech recognition (ASR) and object/scene detection (a minimal ASR sketch follows this list).
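
For reference, the ASR integration mentioned in the last item might look like the following minimal sketch using Apple's Speech framework. Authorization handling, audio-session setup, and error reporting are omitted, and the locale choice is an assumption; production code would also keep strong references to the recognizer and task.

```swift
import Speech

/// A minimal sketch of file-based speech recognition with the Speech framework.
/// Assumes speech-recognition authorization has already been granted.
func transcribe(fileAt url: URL, completion: @escaping (String?) -> Void) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else {
        completion(nil)
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    // Prefer on-device recognition when the current locale and model support it (iOS 13+).
    request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition
    _ = recognizer.recognitionTask(with: request) { result, error in
        guard let result = result, error == nil else {
            completion(nil)
            return
        }
        if result.isFinal {
            completion(result.bestTranscription.formattedString)
        }
    }
}
```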
 
Preferred Qualifications
- Experience with on-device multimodal fusion (vision + speech + text).
 - Familiarity with Apple Neural Engine (ANE) optimizations (see the Core ML compute-unit sketch after this list).
 - Hands-on experience with Metal Performance Shaders (MPS) or the Accelerate framework.
 - Background in reinforcement learning or behavioral heuristics for user adaptation.
 - Understanding of privacy-preserving ML and on-device data pipelines.
 - Prior experience in building assistive, AR, or AI-driven interaction applications.
 - Strong expertise in SwiftUI, UIKit, Storyboards, Auto Layout, Core Data, Core Graphics, and Core Animation.
 - Familiarity with performance tuning, memory management, app lifecycle, background threading, and battery/network optimization.
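
As a small illustration of the ANE point above, one common lever is Core ML's compute-unit configuration. The sketch below loads a compiled model with all compute units enabled so Core ML can schedule eligible layers onto the Neural Engine; the model URL is a placeholder, and the compute-unit choice is a tuning assumption rather than a rule.

```swift
import CoreML

/// A minimal sketch: load a compiled Core ML model (.mlmodelc) while allowing
/// Core ML to use the CPU, GPU, and Apple Neural Engine as it sees fit.
func loadModelPreferringANE(at compiledModelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // Core ML picks the best unit per layer, including the ANE
    return try MLModel(contentsOf: compiledModelURL, configuration: config)
}
```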
 
Soft Skills
- Strong problem-solving and analytical skills.
 - Collaborative mindset with experience working across AI, design, and product teams.
 - Passion for building intelligent, human-centric mobile experiences.
 - Self-driven with attention to performance, reliability, and scalability.
 
Tools & Technologies
- Languages: Swift, Objective-C (optional), Python (for model preparation)
 - Frameworks: Core ML, Vision, Speech, AVFoundation, Combine, SwiftUI
 - Tools: Xcode, Core ML Tools (coremltools), coremlcompiler (.mlmodelc), Metal, Git, Jira
 - ML Ecosystem: PyTorch, TensorFlow, ONNX, coremltools converters