Data Engineer — AI / BI
Artificial Intelligence & Business Intelligence  |  Data & Analytics
Job ID #: 26-2253
Hyderabad, Telangana, IND | In-Office | Mid level
Software • Consulting • Cybersecurity
Who We Are:
Since our inception in 2006, Navitas has grown into an industry leader in the digital transformation space, serving as a trusted advisor to clients across the commercial, federal, and state and local markets.
What We Do:
At our very core, we're a group of problem solvers providing award-winning technology solutions to drive digital acceleration for our customers. With proven solutions and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and to deliver cutting-edge, transformative solutions.

Position Overview
We are seeking a Databricks Engineer to design, build, and operate a Data & AI platform with a strong foundation in the Medallion Architecture (raw/bronze, curated/silver, and mart/gold layers). This platform will orchestrate complex data workflows and scalable ELT pipelines to integrate data from enterprise systems such as PeopleSoft, D2L, and Salesforce, delivering high-quality, governed data for machine learning, AI/BI, and analytics at scale.
You will play a critical role in engineering the infrastructure and workflows that enable seamless data flow across the enterprise, ensure operational excellence, and provide the backbone for strategic decision-making, predictive modeling, and innovation.

Responsibilities:
Data & AI Platform Engineering (Databricks-Centric):
  • Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles.
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers.
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation.
  • Apply schema evolution and data versioning to support agile data development.
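In production this work would be written in PySpark against Delta tables; as a purely illustrative, dependency-free sketch (all table names and fields are hypothetical), the bronze-to-silver promotion described above amounts to a cleansing and deduplication step:

```python
# Illustrative stand-in for a PySpark bronze -> silver transform.
# Real pipelines would use spark.read / Delta Lake MERGE; plain Python
# structures are used here only to show the cleansing and dedup logic.

def bronze_to_silver(bronze_rows):
    """Promote raw (bronze) records to a curated (silver) layer:
    drop records missing a primary key, normalize casing, and
    keep only the latest version of each key (last-write-wins)."""
    latest = {}
    for row in bronze_rows:
        key = row.get("student_id")           # hypothetical key field
        if key is None:
            continue                          # reject malformed records
        row = dict(row, email=(row.get("email") or "").strip().lower())
        prev = latest.get(key)
        if prev is None or row["updated_at"] >= prev["updated_at"]:
            latest[key] = row                 # keep the newest record
    return list(latest.values())

bronze = [
    {"student_id": 1, "email": "A@X.EDU ", "updated_at": "2024-01-01"},
    {"student_id": 1, "email": "a@x.edu",  "updated_at": "2024-02-01"},
    {"student_id": None, "email": "bad",   "updated_at": "2024-01-15"},
]
silver = bronze_to_silver(bronze)
```

In a real Databricks pipeline the same logic would typically be expressed as a Delta Lake `MERGE INTO` keyed on the primary key, with schema enforcement handled by the table definition.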
Platform Integration & Data Ingestion:
  • Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks.
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data.
  • Design standardized data ingestion processes with automated error handling, retries, and alerting.
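The automated error handling and retries called for above are commonly implemented as exponential backoff around each source-system call; a minimal stdlib sketch (the fetch function and failure behavior are hypothetical):

```python
import time

def ingest_with_retries(fetch, max_attempts=3, base_delay=0.01, alert=print):
    """Call a source-system fetch function, retrying transient failures
    with exponential backoff and alerting when all attempts fail."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception as exc:
            if attempt == max_attempts:
                alert(f"ingestion failed after {attempt} attempts: {exc}")
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 10ms, 20ms, ...

# Hypothetical flaky source: fails twice, then returns rows.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("source unavailable")
    return [{"id": 1}]

rows = ingest_with_retries(flaky_fetch)
```

In practice the `alert` hook would post to a monitoring channel rather than print, and the retry policy would distinguish transient errors (timeouts, throttling) from permanent ones (bad credentials).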
Data Quality, Monitoring, and Governance:
  • Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers.
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures.
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement.
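Data quality rules of the kind listed above are typically declarative checks run against each layer; a simplified sketch with hypothetical rules and fields:

```python
def run_quality_checks(rows, rules):
    """Apply named validation rules to each record and collect failures,
    mirroring the checks enforced between medallion layers."""
    failures = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                failures.append((i, name))
    return failures

rules = {  # hypothetical rule set
    "id_present":   lambda r: r.get("id") is not None,
    "gpa_in_range": lambda r: 0.0 <= r.get("gpa", -1.0) <= 4.0,
}
rows = [{"id": 1, "gpa": 3.2}, {"id": None, "gpa": 5.0}]
failures = run_quality_checks(rows, rules)
```

On Databricks the same rules would usually be expressed as Delta Live Tables expectations or a framework like Great Expectations, with failures routed to a quarantine table rather than a Python list.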
Security, Privacy, and Compliance:
  • Enforce data security best practices including row-level security, encryption at rest/in transit, and fine-grained access control via Unity Catalog.
  • Design and implement data masking, tokenization, and anonymization for compliance with privacy regulations (e.g., GDPR, FERPA).
  • Work with security teams to audit and certify compliance controls.
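One common approach to the masking and anonymization mentioned above is deterministic pseudonymization via a keyed hash, so equal inputs map to equal tokens (preserving joins) without being reversible. A hedged stdlib sketch — the key, field names, and masking format are all hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-in-a-vault"  # hypothetical; keep in a secrets manager

def pseudonymize(value: str) -> str:
    """Deterministically tokenize a sensitive value with HMAC-SHA256.
    The original value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Partial masking for display: keep the domain, hide the local part."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain

t1 = pseudonymize("alice@example.edu")
t2 = pseudonymize("alice@example.edu")
masked = mask_email("alice@example.edu")
```

In Unity Catalog this logic would typically live in column masks or dynamic views, so the policy is enforced centrally rather than per pipeline.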
AI/ML-Ready Data Foundation:
  • Enable data scientists by delivering high-quality, feature-rich data sets for model training and inference.
  • Support AIOps/MLOps lifecycle workflows using MLflow for experiment tracking, model registry, and deployment within Databricks.
  • Collaborate with AI/ML teams to create reusable feature stores and training pipelines.
Cloud Data Architecture and Storage:
  • Architect and manage data lakes on Azure Data Lake Storage (ADLS) or Amazon S3, and design ingestion pipelines to feed the bronze layer.
  • Build data marts and warehousing solutions using platforms like Databricks.
  • Optimize data storage and access patterns for performance and cost-efficiency.
Documentation & Enablement:
  • Maintain technical documentation, architecture diagrams, data dictionaries, and runbooks for all pipelines and components.
  • Provide training and enablement sessions to internal stakeholders on the Databricks platform, Medallion Architecture, and data governance practices.
  • Conduct code reviews and promote reusable patterns and frameworks across teams.
Reporting and Accountability:
  • Submit a weekly schedule of hours worked and progress reports outlining completed tasks, upcoming plans, and blockers.
  • Track deliverables against roadmap milestones and communicate risks or dependencies.

Required Qualifications:
  • Hands-on experience with Databricks, Delta Lake, and Apache Spark for large-scale data engineering.
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments.
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments.
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic.
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms.
  • Familiarity with data governance, lineage tracking, and metadata management tools.

Preferred Qualifications:
  • Prior UMGC or USM experience.
  • Experience with Databricks Unity Catalog for metadata management and access control.
  • Experience deploying ML models at scale using MLflow or similar MLOps tools.
  • Familiarity with cloud platforms like Azure or AWS, including storage, security, and networking aspects.
  • Knowledge of data warehouse design and star/snowflake schema modeling.

Skills Required

  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, Statistics, or related field
  • 4+ years of experience in a data engineering role
  • 1-2 years focused on AI/ML data infrastructure or BI environments
  • Proficiency in SQL
  • Expertise in Python for data engineering (PySpark, Pandas, SQLAlchemy)
  • Hands-on experience with a modern cloud data warehouse (Snowflake, BigQuery, Databricks)

The Company
Columbia, Maryland
84 Employees
Year Founded: 2006

What We Do

Incorporated in 2006, Navitas Business Consulting Inc. is a Woman-Owned Small Business (WOSB) with expertise in Cloud Migration, Data & Insights, Artificial Intelligence, Threat Intelligence, Cybersecurity, Agile PMO & Advisory, and Healthcare, serving Federal, Commercial, and State & Local customers. We are proud to be recognized by the Washington Post as a "Top Place to Work" from 2018 through 2022 and again in 2024!
