Data Engineer II
Honeywell
Location: Bangalore (Hybrid)
Join a team that is elevating our strategy to drive advanced analytics and visualization tools across the Value Engineering COE. In this role as Data Engineer II, you will design, implement, and manage the data architecture, systems, and processes needed to collect, store, process, and analyze high-volume, high-dimensional data, providing strategic insight into complex business problems. This involves creating and maintaining scalable, efficient, and secure data pipelines, data warehouses, and data lakes, and ensuring consistent data quality and availability for analysis and reporting, in compliance with data governance and security standards.
Honeywell helps organizations solve the world's most complex challenges in automation, the future of aviation and energy transition. As a trusted partner, we provide actionable solutions and innovation through our Aerospace Technologies, Building Automation, Energy and Sustainability Solutions, and Industrial Automation business segments – powered by our Honeywell Forge software – that help make the world smarter, safer and more sustainable.
As a Data Engineer II at Honeywell, you will design, develop, and maintain data infrastructure, manage data pipelines, and support data-driven decision-making, collaborating with cross-functional teams to optimize data workflows and drive innovation.
YOU MUST HAVE
- 4+ years of relevant experience in Data Engineering, ETL Development, or Database Administration.
- Experience in Azure Data Factory, Azure DW, Databricks, and APIs
- Expertise in querying and programming languages such as Python, SQL, and PySpark
- Experience with structured data; knowledge of unstructured data is not required but is valued.
- Experience in Snowflake
- Knowledge of Agile development methodology
WE VALUE
- Knowledge of databases, data warehouse platforms (Snowflake), and cloud-based tools.
- Understanding of data mining concepts and applications, such as classification or predictive algorithms.
- Experience in using data integration tools for ETL processes.
- Knowledge of data modelling techniques, including schema design for relational databases.
- Experience with Python packages such as polars and scikit, especially those used for data processing and transformation (a brief sketch follows this list).
- Ability to develop and communicate a technical vision for projects and initiatives that can be understood by customers and management.
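As an illustration of the data processing and transformation work mentioned above, here is a minimal sketch using polars; the file name, column names, and aggregation are hypothetical placeholders, not a description of any actual Honeywell dataset.

```python
import polars as pl

# Hypothetical input: a CSV of sensor readings with "site" and "value" columns.
readings = pl.read_csv("sensor_readings.csv")

# Clean and aggregate: drop missing values, then compute per-site averages.
site_summary = (
    readings
    .drop_nulls(subset=["value"])
    .with_columns(pl.col("value").cast(pl.Float64))
    .group_by("site")
    .agg(
        pl.col("value").mean().alias("avg_value"),
        pl.len().alias("reading_count"),
    )
    .sort("avg_value", descending=True)
)

print(site_summary)
```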
Duties and Responsibilities
- Work on complex data science and analytics projects in support of the Value Engineering organization.
- Understand and translate business requirements into technical solutions, leveraging available and new technologies as required.
- Design and implement data models and schemas to support analytical and reporting requirements.
- Collaborate with data scientists and data analysts to define and structure data for effective analysis and reporting.
- Develop and maintain ETL (Extract, Transform, Load) processes (a minimal pipeline sketch follows this list).
- Administer, optimize, and manage databases, data warehouses, and data lakes to ensure performance, reliability, and scalability.
- Enforce data governance policies, standards, and best practices to maintain data quality, privacy, and security.
- Create and maintain comprehensive documentation for data architecture, processes, and systems.
- Troubleshoot and resolve data-related problems and optimize system performance.
- Partner with IT support team on production processes, continuous improvement, and production deployments.
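As a hedged illustration of the ETL responsibilities above, the following PySpark sketch extracts raw records, applies a simple transformation, and loads the result as partitioned Parquet; the paths, columns, and filter rule are hypothetical placeholders rather than actual Honeywell pipelines.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: read raw order records (hypothetical path and schema).
orders = spark.read.json("/data/raw/orders/")

# Transform: standardize timestamps and keep completed orders only.
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write the curated table partitioned by date for downstream reporting.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders/")

spark.stop()
```

In practice, a pipeline like this would typically be orchestrated through tools such as Azure Data Factory or Databricks jobs and feed a warehouse such as Snowflake, in line with the tooling listed in the requirements.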