Job Details
Job Description and Requirements
We’re looking for a hands-on Data Engineer who thrives in a fast-paced, cloud-native environment and is passionate about building scalable data infrastructure. In our IoT Analytics team, you’ll be at the heart of transforming raw device and business data into actionable insights that power our connected tools, software platforms, and advanced analytics services.
You’ll work across the full data lifecycle—from ingestion and transformation to API development and reporting—leveraging modern AWS technologies and infrastructure-as-code principles. This is a high-impact role where your work directly supports strategic decision-making, predictive analytics, and customer-facing applications.
Who is Hilti?
Hilti is the place where innovation is harnessed to improve productivity, safety, and sustainability in the global construction industry and beyond. It's where solutions are born based on strong customer relationships, making it possible to build a better future. It's where 34,000 people across 120 different locations worldwide take pride in being part of the team. It's where people have the opportunity to explore their possibilities, unleash their potential, take responsibility for their personal development, and build their careers for the long term.
Our international IoT Analytics team is the central hub for data engineering and AI services powering Hilti’s connected hardware and construction software. We work at the intersection of embedded systems, cloud platforms, and advanced analytics—covering everything from edge data to business intelligence.
What does the role involve?
As a Data Engineer in our team, you will:
- Design and build scalable data pipelines using Python, SQL, Spark, AWS Glue, Athena, and other AWS-native tools to process structured and semi-structured data from IoT devices and business systems.
- Ensure data quality and reliability through robust validation, monitoring, and unit testing (Pytest), so that data is accurate, consistent, and trustworthy across the entire pipeline.
- Develop and maintain APIs for data access and integration using Python, Go, AWS Lambda, and API Gateway, enabling seamless consumption of data across applications and services.
- Support real-time and streaming analytics, contributing to the design and implementation of our evolving stream processing architecture using AWS-native tools.
- Enable advanced analytics and ML workflows by preparing and transforming data for use in Jupyter notebooks, AWS SageMaker, and other analytical environments.
- Collaborate with BI developers to deliver impactful Power BI dashboards and ensure data availability and usability for reporting and decision-making.
- Implement infrastructure as code using Terraform to automate and manage cloud resources efficiently and securely.
- Drive innovation by exploring new technologies, recommending improvements, and contributing to architectural decisions that enhance scalability and performance.
- Bridge technical and business domains, working closely with both engineering teams and business stakeholders to understand requirements, translate them into data solutions, and ensure alignment with strategic goals.
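For a flavor of the data-quality work described above, here is a minimal, hypothetical sketch of a Pytest-style validation check; the record schema, field names, and rules are illustrative only, not an actual Hilti pipeline:

```python
# Minimal sketch of a pipeline data-quality check (hypothetical IoT record schema).
# In practice a check like this would run in CI against data staged via Glue/Athena.

def validate_device_records(records):
    """Return the subset of IoT device records that pass basic quality rules."""
    valid = []
    for rec in records:
        # Rule 1: required fields must be present
        if not all(k in rec for k in ("device_id", "timestamp", "battery_pct")):
            continue
        # Rule 2: battery percentage must be in a plausible range
        if not 0 <= rec["battery_pct"] <= 100:
            continue
        valid.append(rec)
    return valid


def test_validate_device_records():
    records = [
        {"device_id": "d1", "timestamp": 1700000000, "battery_pct": 87},
        {"device_id": "d2", "timestamp": 1700000001, "battery_pct": 140},  # out of range
        {"device_id": "d3", "timestamp": 1700000002},                      # missing field
    ]
    # Only the fully valid record survives
    assert [r["device_id"] for r in validate_device_records(records)] == ["d1"]
```

Checks like this are typically wired into CI so that schema drift or bad upstream data is caught before it reaches dashboards or ML workflows.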
What do we offer?
Show us what you're made of and we'll offer you opportunities to move around the business - to work abroad, experience different job functions and tackle different markets. It's a great way to find the right match for your ambition and achieve the exciting career you're after.
We have a very thorough people review process, unlike any we know of in other businesses. We pair talent with opportunity, developing our people in their current roles or challenging them to work in new places. It is how we find the right fit, develop our teams personally and professionally, get the best out of each employee, and increase job satisfaction. Additionally, we offer you a wide range of benefits.
Why should you apply?
Become a valuable member of our highly professional, international team of software experts and data scientists, and take on the challenges of a global multinational company using the latest technologies. You will have the freedom to act within your area of responsibility, with career prospects in a dynamic environment and excellent opportunities to develop your skills and broaden your knowledge. Furthermore, 80% of our top positions are filled internally. We have a clearly defined career development track for every employee and an excellent team who are rewarded based on performance.
What you need is
- A Bachelor’s or Master’s degree in Computer Science/Information Technology, Engineering (Computer/Telecommunication), Science & Technology, or equivalent; preference given to candidates with a Master’s degree.
- At least 5 years of hands-on experience in data engineering and cloud-based analytics solutions, with a proven track record of delivering scalable, production-grade systems.
- Strong proficiency in Python and SQL, with experience designing and maintaining ETL pipelines, optimizing query performance, and modeling data across platforms such as Athena, Aurora, and RDS.
- Solid understanding of AWS services including Glue, Lambda, API Gateway, DynamoDB, Athena, RDS, Kinesis, and SageMaker, and familiarity with infrastructure-as-code using Terraform.
- Demonstrated expertise in ensuring data quality and reliability through validation, monitoring, and robust testing practices (e.g., Pytest).
- Experience with Power BI and data visualization tools is a plus.
- Ability to quickly learn new technologies and understand business processes, with a strong problem-solving mindset and a proactive, self-driven approach.
- Excellent communication skills in English, with the ability to collaborate effectively across virtual teams and engage both technical and business stakeholders.