Job Title: Data Engineer (3-6 Years of Experience)
Location: Bangalore / Hyderabad
Job Type: Full-time
About the Role:
We are looking for a skilled Data Engineer with 3-6 years of experience in big data technologies, particularly Java, Apache Spark, SQL, and data lakehouse architectures. The ideal candidate will have a strong background in building scalable data pipelines and experience with modern open table formats such as Apache Iceberg. You will work closely with cross-functional teams to design and implement efficient data solutions in a cloud-based environment.
Key Responsibilities:
- Data Pipeline Development:
  - Design, build, and optimize scalable data pipelines using Apache Spark.
  - Implement and manage large-scale data processing solutions across data lakehouses.
- Data Lakehouse Management:
  - Work with modern lakehouse table formats (e.g., Apache Iceberg) to handle large datasets.
  - Optimize data storage, partitioning, and versioning to ensure efficient access and querying.
- SQL & Data Management:
  - Write complex SQL queries to extract, manipulate, and transform data.
  - Develop performance-optimized queries for analytical and reporting purposes.
- Data Integration:
  - Integrate various structured and unstructured data sources into the lakehouse environment.
  - Work with stakeholders to define data needs and ensure data is available for downstream consumption.
- Data Governance and Quality:
  - Implement data quality checks and ensure the reliability and accuracy of data.
  - Contribute to metadata management and data cataloging efforts.
- Performance Tuning:
  - Monitor and optimize the performance of Spark jobs, SQL queries, and the overall data infrastructure.
  - Work with cloud infrastructure teams to optimize costs and scale as needed.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 3-6 years of experience in data engineering, with a focus on Java, Apache Spark, and SQL.
- Hands-on experience with Apache Iceberg, Snowflake, or similar technologies.
- Strong understanding of data lakehouse architectures and data warehousing principles.
- Proficiency in AWS data services.
- Experience with version control systems like Git and CI/CD pipelines.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Nice to Have:
- Experience with containerization (Docker, Kubernetes) and orchestration tools like Airflow.
- Certifications in AWS cloud technologies.
What We Do:
Sigmoid is a leading data engineering and AI solutions company that helps enterprises gain a competitive advantage through effective data-driven decision-making. Our team is driven by a passion for unraveling data complexities; we generate actionable insights and translate them into successful business strategies.
We leverage our expertise in open-source and cloud technologies to develop innovative frameworks catering to specific customer needs. Our unique approach has positively influenced the business performance of our Fortune 500 clients across CPG, retail, banking, financial services, manufacturing, and other verticals.
Backed by Sequoia Capital, Sigmoid has offices in New York, San Francisco, Dallas, Lima, Amsterdam, and Bengaluru. We are recognized among the world's fastest-growing and most innovative tech companies, with awards and recognitions including the Deloitte Technology Fast 500, Financial Times The Americas' Fastest Growing Companies, Inc. 5000, Great Place To Work India's Best Leaders in Times of Crisis, Data Breakthrough, Aegis Graham Bell, TiE50, NASSCOM Emerge 50, and others.
Learn more at https://www.sigmoid.com/, or visit https://sigmoid.com/careers/ for career opportunities.