AI/ML Systems Research Intern
Nokia
Number of Position(s): 4
Duration: 10 Weeks
Date: June 2026 to August 2026
Location: Hybrid (Murray Hill, NJ)
EDUCATIONAL RECOMMENDATIONS
Currently a PhD candidate in Computer Science, Computer Systems Engineering, Math, Artificial Intelligence, or a related field at an accredited school in the USA.
WHAT WE OFFER
- Flexible and hybrid working schemes to balance study, work, and life
- Professional development events and networking opportunities
- Well-being programs, including Personal Support Service 24/7 - a confidential support channel open to all Nokia employees and their families in challenging situations
- Opportunities to join Nokia Employee Resource Groups (NERGs) and build connections across the organization
- Employee Growth Solutions, mentorship programs, and coaching support for your career development
- A learning environment that fosters both personal growth and professional development – for your role and beyond
Join Nokia Bell Labs’ Decentralized Systems Research team as a summer intern in Murray Hill, NJ, and help shape the future of AI/ML systems. In this role, you’ll explore cutting-edge research focused on optimizing performance, scalability, and efficiency across diverse software and hardware environments. You’ll have the opportunity to collaborate with world-class researchers, contribute to innovations, publish in top-tier venues, and even pursue patents. This internship offers a unique chance to expand your expertise while working at the forefront of decentralized and intelligent systems.
You have:
- Expertise in deep learning fundamentals, including large language models and agent-based systems, and experience with training, deploying, and/or profiling models.
- Experience in principled systems design and development.
- Excellent communication skills, with the ability to analyze complex problems and present findings clearly.
- Strong publication record in top-tier AI and systems conferences.
It would be nice if you also had:
We encourage applications from candidates who have a strong foundation in one or more of the areas below, even if you don’t meet every criterion. We value diverse perspectives, innovative thinking, and complementary skills.
Agentic AI & Large Language Models (LLMs):
Familiarity with large-scale model inference and optimization, as well as experience in LLM reasoning, prompt engineering, and resource-constrained computation.
AI Systems Architecture & Optimization:
Experience managing GPU or accelerator resources, optimizing performance, and benchmarking across different hardware environments. A solid understanding of AI infrastructure design and inference workflows—such as KV-cache management, batching, and offloading—is beneficial.
Compilers & Hardware–Software Co-Design:
Knowledge of computational graph representations (e.g., ONNX, MLIR, XLA, TorchScript) and model optimization frameworks (e.g., TensorRT, TVM). Experience working with heterogeneous accelerator ecosystems (e.g., TPUs, AMD ROCm GPUs) or parallelizing compilers is a plus.
Distributed, Edge AI & Web3 Computing:
Understanding of distributed or edge inference systems (e.g., Ray Serve, DeepSpeed-Inference, vLLM), plus familiarity with blockchain technologies, smart contracts, or wireless networking protocols (Wi-Fi, 3GPP, Bluetooth).
Are you passionate about solving problems? As part of our team, you will:
- Design and implement state-of-the-art AI/ML decentralized systems.
- Validate and evaluate your implementation in our cutting-edge labs.
- Interface with, exchange ideas with, and learn from the experts.