Discover Technata Job Board

Find your next tech job in Kanata North, Canada’s largest technology park. Then explore endless international opportunities and dream about where your career will take you. With the country’s largest density of technology companies, ranging from promising startups to leading global giants, Kanata North is the place to be if you are serious about a career in tech.

Lead Data Engineer

Honeywell

Software Engineering, Data Science
Atlanta, GA, USA
Posted on Dec 4, 2025

The Data Engineer will be responsible for developing and implementing data-driven solutions and products that optimize operations, enhance efficiency, and drive growth for Honeywell’s Industrial customers. This role focuses on building AI data products using IoT data, handling large-scale streaming and telemetry data, and deploying data pipelines for AI products.

Scope: The role involves collaborating with stakeholders, data scientists, and product teams to create data products that serve analytics solutions and AI/ML needs. It includes implementing data models and data pipelines that integrate diverse data sources, and building analytics solutions that leverage AI.

Challenges: The Data Engineer will face challenges such as managing and processing huge volumes of streaming data, ensuring data quality, and implementing efficient solutions while working in a fast-paced environment with ambiguous requirements.

Opportunities: This role offers the chance to work on cutting-edge AI projects, leverage best-in-class data platforms, and develop innovative data products that transform industrial operations. It is also a professional growth opportunity: you will work alongside a global team of data engineers and ML experts to drive manufacturing innovation and operational excellence.

Joining Honeywell’s data engineering team means being part of a high-performing global team that delivers innovative AI/ML data products for industrial customers. You will have the opportunity to work on challenging projects, leverage the latest AI technologies, and make a significant impact on optimizing operations and driving growth for our customers. The role offers professional growth, collaboration with experts, and the chance to be at the forefront of AI-driven industrial solutions.


Honeywell helps organizations solve the world's most complex challenges in automation, the future of aviation and energy transition. As a trusted partner, we provide actionable solutions and innovation through our Aerospace Technologies, Building Automation, Energy and Sustainability Solutions, and Industrial Automation business segments – powered by our Honeywell Forge software – that help make the world smarter, safer and more sustainable.

US PERSON REQUIREMENTS:

Due to compliance with US export control laws and regulations, the candidate must be a US person, defined as a US citizen, a US permanent resident, or a person holding protected status in the US under asylum or refugee status, or must have the ability to obtain an export authorization.

Required Competencies:

  • Strong experience with data engineering concepts such as CDC, ELT/ETL workflows, streaming replication, and data quality frameworks.
  • Expertise in data modeling (dimensional, data vault) and modern data lake architectures (medallion, Delta Lake), with practical experience in schema evolution strategies.
  • Experience handling high-volume IoT/telemetry data streams using technologies like Apache Kafka, Azure Event Hubs, or similar (see the sketch after this list).
  • Proficiency in programming languages such as Scala or PySpark, and in Python.
  • Experience building and deploying data pipelines for AI products.
  • Familiarity with cloud platforms such as Databricks and Azure/GCP.
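
To give a concrete flavour of the streaming work these competencies describe, here is a minimal PySpark Structured Streaming sketch that lands IoT telemetry from a Kafka topic into a Delta "bronze" table, the raw landing layer of a medallion architecture. The broker address, topic name, payload schema, and storage paths are hypothetical placeholders, not Honeywell specifics, and the snippet assumes an environment (such as Databricks) where the Kafka and Delta Lake connectors are already available.

    # Minimal sketch: Kafka telemetry stream -> Delta bronze table (medallion landing layer).
    # All names below (broker, topic, schema, paths) are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import (DoubleType, StringType, StructField,
                                   StructType, TimestampType)

    spark = SparkSession.builder.appName("telemetry-bronze").getOrCreate()

    # Hypothetical telemetry payload: one JSON reading per Kafka message.
    payload_schema = StructType([
        StructField("device_id", StringType()),
        StructField("metric", StringType()),
        StructField("value", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Read the raw stream; Kafka delivers the value as bytes, so cast then parse JSON.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
           .option("subscribe", "iot-telemetry")              # placeholder topic
           .load())

    parsed = (raw
              .select(from_json(col("value").cast("string"), payload_schema)
                      .alias("payload"))
              .select("payload.*"))

    # Append to the bronze Delta table; the checkpoint makes the sink restartable,
    # and mergeSchema permits additive schema evolution as devices add new fields.
    (parsed.writeStream
           .format("delta")
           .option("checkpointLocation", "/lake/_checkpoints/telemetry_bronze")
           .option("mergeSchema", "true")
           .outputMode("append")
           .start("/lake/bronze/telemetry"))

Landing data raw and unvalidated in bronze, then refining it in downstream layers, is the usual medallion pattern; it keeps ingestion simple while data quality rules live in later stages.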

Work Experience:

  • 5 years of data engineering experience.
  • 2 years of experience in programming with Scala or PySpark.
  • 2 years of experience in analyzing and modeling large-scale datasets.

Preferred Competencies:

  • Experience with complex SQL queries and large-scale data analytics solutions.
  • Knowledge of Agile and Scrum methodologies.
  • Expertise in version control systems and CI/CD methodologies.
  • Working knowledge of NoSQL/Graph systems and containerization technologies like Docker and Kubernetes.
  • Familiarity with GenAI and ML concepts.

Key Responsibilities:

  • Lead the design, development, and implementation of data engineering solutions
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions
  • Design and implement data pipelines and ETL processes (a brief illustrative sketch follows this list)
  • Ensure the performance, availability, and security of data platforms
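
As a companion to the streaming sketch above, a batch ETL step of the kind listed under these responsibilities might promote bronze telemetry to a deduplicated, quality-filtered "silver" table. Again, the paths and quality rules are assumptions for illustration, not a prescribed Honeywell implementation.

    # Illustrative bronze -> silver ETL step: deduplication plus basic quality gates.
    # Paths and rules are assumptions carried over from the sketch above.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("telemetry-silver").getOrCreate()

    bronze = spark.read.format("delta").load("/lake/bronze/telemetry")

    silver = (bronze
              .dropDuplicates(["device_id", "metric", "event_time"])  # drop replayed events
              .filter(col("value").isNotNull())        # simple data quality gate
              .filter(col("event_time").isNotNull()))  # discard unstamped events

    # A full overwrite keeps the sketch short; an incremental MERGE would be more
    # typical for production-scale tables.
    (silver.write
           .format("delta")
           .mode("overwrite")
           .save("/lake/silver/telemetry"))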