Addepto is a leading consulting and technology company specializing in AI and Big Data, helping clients deliver innovative data projects. We partner with top-tier global enterprises and pioneering startups, including Rolls-Royce, Continental, Porsche, ABB, and WGU. Our exclusive focus on AI and Big Data has earned us recognition by Forbes as one of the top 10 AI companies.
As a Data Engineer (Mid/Senior), you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects for which we are seeking talented people:
- Design of data transformation and downstream DataOps pipelines for global car manufacturers. This project builds a data processing system for both real-time streaming and batch data (see the pipeline sketch after this list). We’ll handle data for business uses such as process monitoring, analysis, and reporting, while also exploring LLMs for chatbots and data analysis. Key tasks include data cleaning, normalization, and optimizing the data model for performance and accuracy.
- Design and development of a universal data platform for global aerospace companies. This Azure- and Databricks-powered initiative combines diverse enterprise and public data sources. The platform is at an early stage of development, covering the design of its architecture and processes, and offers considerable freedom in technology selection.
- Design and development of a data platform for managing electric and hybrid vehicle data. This project involves building a robust data processing pipeline for large volumes of electric vehicle data. We’re working with thousands of signals collected at short intervals, which requires efficient streaming and batch services. The data undergoes transformations for IoT applications, powering multiple use cases: business intelligence, customer support, vehicle maintenance, operational analysis, and AI-driven insights. This is a chance to work with cutting-edge technology and make a tangible impact on the future of electric mobility.
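To give a flavor of the streaming-plus-batch work these projects involve, here is a minimal sketch of a signal-cleanup job in PySpark Structured Streaming. It is illustrative only: the Kafka endpoint, topic name, schema, and storage paths are assumptions, not project specifics.

```python
# Minimal sketch of a streaming signal-cleanup job of the kind described above.
# Endpoint, topic, schema, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("telemetry-cleanup").getOrCreate()

# Hypothetical schema: one JSON record per vehicle signal reading.
schema = StructType([
    StructField("vehicle_id", StringType()),
    StructField("signal", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed endpoint
    .option("subscribe", "vehicle-signals")            # assumed topic
    .load()
)

clean = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("r"))
       .select("r.*")
       .dropna(subset=["vehicle_id", "signal", "value"])        # basic cleaning
       .withColumn("signal", F.lower(F.trim(F.col("signal"))))  # normalization
)

# Write micro-batches to a lake path; the same frame could feed BI or ML jobs.
query = (
    clean.writeStream
    .format("parquet")
    .option("path", "/lake/clean/vehicle_signals")              # assumed path
    .option("checkpointLocation", "/lake/_chk/vehicle_signals") # assumed path
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```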
Discover our perks and benefits:
- Work in a supportive team of AI & Big Data enthusiasts.
- Engage with top-tier global enterprises and cutting-edge startups on international projects.
- Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces.
- Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences.
- Choose from various employment options: B2B, employment contracts, or contracts of mandate.
- Make use of 20 fully paid days off available for B2B contractors and individuals under contracts of mandate.
- Participate in team-building events and utilize the integration budget.
- Celebrate work anniversaries, birthdays, and milestones.
- Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching.
- Get full work equipment for optimal productivity, including a laptop and other necessary devices.
- With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups.
- Experience a smooth onboarding with a dedicated buddy, and start your journey in our friendly, supportive, and autonomous culture.
In this position, you will:
- Lead the design of scalable data management architectures, infrastructure, and platform solutions for streaming and batch processing, using Big Data technologies like Snowflake and Airflow (a sample orchestration sketch follows this list).
- Design and implement data management and data governance processes and best practices.
- Contribute to the development of CI/CD and MLOps processes.
- Develop applications to aggregate, process, and analyze data from diverse sources.
- Collaborate with the Data Science team on Machine Learning projects, including text/image analysis and predictive model building.
- Translate business requirements into technical solutions and ensure optimal performance and quality.
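As a rough illustration of the orchestration side of this role, below is a minimal Airflow DAG skeleton. The DAG id, task names, and schedule are hypothetical; a real pipeline would invoke Snowflake, Databricks, or Spark jobs rather than placeholder functions.

```python
# Minimal Airflow DAG skeleton; dag_id, task names, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_raw(**context):
    # Placeholder extract step: in practice, land the day's source data in staging.
    pass


def transform(**context):
    # Placeholder transform step: clean, normalize, and publish curated tables.
    pass


with DAG(
    dag_id="daily_signal_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="load_raw", python_callable=load_raw)
    shape = PythonOperator(task_id="transform", python_callable=transform)
    extract >> shape  # run the transform only after extraction succeeds
```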
What you’ll need to succeed in this role:
- 6+ years of proven commercial experience implementing, developing, or maintaining Big Data systems.
- Strong programming skills in Python: clean code and solid OOP design.
- Experience in designing and implementing data governance and data management processes.
- Hands-on experience with Big Data technologies like Snowflake, Airflow, Druid, Iceberg, Kubernetes, Databricks, or Dagster.
- Proven expertise in implementing and deploying solutions in cloud environments (with a preference for Azure and AWS).
- Excellent understanding of dimensional data and data modeling techniques (a toy star-schema example follows this list).
- Excellent communication skills and consulting experience with direct client interaction.
- Ability to work independently and take ownership of project deliverables.
- Master’s or Ph.D. in Computer Science, Data Science, Mathematics, Physics, or a related field.
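For candidates wondering what dimensional modeling means here in practice, the sketch below joins a toy fact table to a dimension table in PySpark and rolls it up. All table and column names are invented for illustration.

```python
# Toy star-schema query: fact table joined to a dimension, then aggregated.
# All table and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

fact = spark.table("fact_signal_readings")  # grain: one row per signal reading
dim_vehicle = spark.table("dim_vehicle")    # one row per vehicle, keyed by vehicle_key

# Average reading per vehicle model per day: the classic fact-join-dimension rollup.
daily_by_model = (
    fact.join(dim_vehicle, "vehicle_key")
        .groupBy("model", F.to_date("event_time").alias("day"))
        .agg(F.avg("value").alias("avg_value"))
)
daily_by_model.show()
```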
Nice to have: Spark, Hadoop, NiFi, Kafka.