About the Role
We’re looking for a Mobile AI Engineer with deep experience in native mobile development (iOS or Android) and a strong understanding of computer vision, speech recognition, and heuristic-driven systems.
In this role, you’ll design and implement next-generation AI-powered mobile experiences that let users interact through vision, speech, and context. You’ll work closely with AI researchers, backend engineers, and designers to bring real-time intelligence to mobile devices using on-device ML pipelines and hybrid decision systems.
Key Responsibilities
- Build and maintain mobile applications (iOS and/or Android) that integrate vision, speech, and heuristic intelligence.
- Deploy and optimize on-device ML models (e.g., Core ML, TensorFlow Lite, MediaPipe, ML Kit) for real-time inference.
- Implement heuristic and rule-based systems that enhance model reliability, adapt to user context, and improve decision quality.
- Collaborate with AI teams to convert and fine-tune multimodal ML models (vision + speech + text) for mobile deployment.
- Integrate camera, microphone, and sensor data pipelines to support contextual, real-time user interactions.
- Optimize for low latency, battery efficiency, and memory footprint across platforms.
- Partner with designers and researchers to transform AI capabilities into intuitive, human-centric mobile experiences.
- Contribute to CI/CD pipelines, automated testing, and performance monitoring for AI-driven features.
- Stay current with the latest Apple, Google, and open-source AI/ML toolchains for mobile applications.
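To illustrate the kind of heuristic-plus-ML hybrid described in the responsibilities above, here is a minimal Python sketch of a rule layer that post-filters model predictions. The `HeuristicGate` class, thresholds, and labels are hypothetical examples, not part of any specific production stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # model confidence in [0, 1]

class HeuristicGate:
    """Hypothetical rule layer that post-filters on-device model output.

    Rules like these often sit alongside ML predictions to suppress
    low-confidence or contextually implausible results.
    """

    def __init__(self, min_confidence: float = 0.6):
        self.min_confidence = min_confidence

    def accept(self, det: Detection, device_is_moving: bool) -> bool:
        # Rule 1: drop anything below the confidence floor.
        if det.confidence < self.min_confidence:
            return False
        # Rule 2 (context): while the device is moving, demand extra
        # confidence, since motion blur degrades vision models.
        if device_is_moving and det.confidence < 0.8:
            return False
        return True

gate = HeuristicGate()
dets = [Detection("cat", 0.92), Detection("dog", 0.55), Detection("car", 0.7)]
kept = [d.label for d in dets if gate.accept(d, device_is_moving=True)]
print(kept)  # only "cat" survives the stricter moving-device rule
```

The same pattern ports directly to Swift or Kotlin; the point is that cheap, explainable rules can adapt model output to user context without retraining.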
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field.
- 8+ years of experience in native mobile development, with proficiency in Swift (iOS) and/or Kotlin/Java (Android).
- Hands-on experience with Core ML, TensorFlow Lite, MediaPipe, or ML Kit.
- Proven background in computer vision (object detection, image segmentation) and/or speech models (ASR, TTS).
- Familiarity with model optimization, quantization, and on-device inference workflows.
- Understanding of heuristic and rule-based systems, and how they can complement ML predictions.
- Strong grasp of mobile performance tuning, asynchronous programming, and data streaming from camera/audio sources.
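The "model optimization, quantization" qualification above refers to techniques such as post-training 8-bit quantization, which real toolchains like Core ML Tools and the TFLite converter perform internally. As a minimal sketch of the underlying idea, here is symmetric int8 quantization in plain Python; the weight values are made up for illustration.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: map floats onto [-127, 127].

    Returns the quantized values and the scale needed to dequantize.
    On-device runtimes store int8 weights (4x smaller than float32)
    and multiply by `scale` at inference time.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]  # made-up float32 weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Per-weight rounding error is bounded by scale / 2.
max_err = max(abs(a - w) for a, w in zip(approx, weights))
print(q, max_err)
```

Production converters add per-channel scales, zero points for asymmetric ranges, and calibration data, but the storage and accuracy trade-off shown here is the core of on-device model optimization.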
Preferred Qualifications
- Experience working with multimodal AI systems (combining vision, speech, text, and context).
- Familiarity with ONNX Runtime, the Android Neural Networks API (NNAPI), and the Apple Neural Engine (ANE).
- Experience with AR/VR interfaces, assistive technologies, or context-aware applications.
- Knowledge of reinforcement learning or behavioral heuristics for user personalization.
- Background in privacy-preserving ML, federated learning, or on-device analytics.
- Prior contributions to cross-platform AI SDKs or AI feature frameworks for mobile.
- Strong expertise in SwiftUI, UIKit, Storyboards, Auto Layout, Core Data, Core Graphics, and Core Animation.
- Deep understanding of Android architecture components (MVVM/MVI), Android Jetpack libraries, Coroutines, and modern UI principles.
- Familiarity with performance tuning, memory management, the app lifecycle, background threading, and battery/network optimization.
Soft Skills
- Strong problem-solving ability with a data-informed mindset.
- Excellent communication and collaboration skills across disciplines (AI, UX, product).
- Passionate about building intelligent, adaptive, and ethical AI experiences for mobile users.
- Self-starter with a focus on quality, reliability, and innovation.
Tools & Technologies
| Category | Tools |
| --- | --- |
| Languages | Swift, Kotlin, Java, Python |
| Frameworks | Core ML, TensorFlow Lite, MediaPipe, ML Kit, Vision, Speech |
| APIs | CameraX, AVFoundation, SpeechRecognizer, Combine, WorkManager |
| ML Tooling | Core ML Tools, TFLite Converter, ONNX, PyTorch Mobile |
| Dev Tools | Xcode, Android Studio, Gradle, Git, Firebase, Jira |
| Performance | Instruments, Android Profiler, compiled Core ML models (.mlmodelc), NNAPI, ANE |