At Nexus, we help organizations turn complex data into clear, actionable insight — so the hard work behind artificial intelligence, analytics, and cloud infrastructure doesn’t slow innovation down… it accelerates it.
We’re a team of builders, problem-solvers, and collaborators who believe in moving fast, learning continuously, and doing great work together. Our role is to handle the heavy lifting of modern data platforms, governance, and cloud transformation so our clients can focus on making confident, strategic decisions that move their businesses forward.
We’re proud of the culture we’re building across the U.S. and India — one rooted in curiosity, ownership, collaboration, and care — and we’re excited to keep growing with people who want to do meaningful work alongside genuinely great teammates.
Role Summary / Purpose
The Big Data Developer plays a key role in the modernization of the data ecosystem, supporting the migration of legacy MAPR/Cloudera/Hortonworks applications to open-source frameworks compatible with NexusOne.
This individual will focus on refactoring, optimizing, and validating data processing pipelines, ensuring performance, scalability, and alignment with enterprise data standards.
The role requires strong technical expertise across distributed data systems, open-source frameworks, and hybrid data environments.
Analyze, refactor, and modernize Spark/MapReduce/Hive/Tez jobs for execution within NexusOne’s managed Spark and Trino environments.
Design, build, and optimize batch and streaming pipelines using Spark, NiFi, and Kafka (see the streaming sketch after this list).
Convert existing ETL jobs and DAGs from Cloudera/MAPR ecosystems to open-source equivalents.
Collaborate with Data Engineers and Architects to define new data ingestion and transformation patterns.
Tune performance across large-scale data processing workloads (partitioning, caching, resource allocation); a tuning sketch follows this list.
Implement data quality and validation frameworks to ensure consistency during migration.
Support code reviews, performance tests, and production readiness validation for migrated workloads.
Document conversion approaches, dependencies, and operational runbooks.
Partner with application SMEs to ensure domain alignment and business continuity.
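For illustration of the kind of Kafka-fed streaming pipeline described above, here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, event schema, and S3 paths are hypothetical placeholders, not details of the NexusOne environment.

```python
# Minimal Structured Streaming sketch; all names and paths below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-stream-sketch").getOrCreate()

# Hypothetical schema for the JSON payload carried in the Kafka message value.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read from Kafka, parse the JSON value, and land the stream as Parquet.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "legacy.events")                 # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/curated/events/")            # placeholder path
    .option("checkpointLocation", "s3a://example-bucket/chk/events/")  # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

In practice, the sink, trigger interval, and checkpoint layout would follow the target table design in NexusOne rather than the placeholders above.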
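And a minimal batch sketch of the partitioning and caching work called out in the tuning responsibility; the paths, column names, and shuffle-partition count are assumptions and would be sized per workload rather than copied as-is.

```python
# Batch tuning sketch; table paths, columns, and settings are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("orders-refactor-sketch")
    .config("spark.sql.shuffle.partitions", "400")  # sized to the workload, not a universal value
    .getOrCreate()
)

# Read the legacy extract and prune columns early to reduce shuffle volume.
orders = (
    spark.read.parquet("s3a://example-bucket/raw/orders/")
    .select("order_id", "customer_id", "order_date", "amount")
    .filter(col("order_date") >= "2024-01-01")
)

# Cache only because the frame feeds two downstream aggregations.
orders = orders.cache()

daily_totals = orders.groupBy("order_date").sum("amount")
customer_totals = orders.groupBy("customer_id").sum("amount")

# Repartition on the write key so output files align with the table's partition layout.
(
    daily_totals.repartition("order_date")
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-bucket/curated/daily_totals/")
)
customer_totals.write.mode("overwrite").parquet("s3a://example-bucket/curated/customer_totals/")
```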
Core Frameworks: Apache Spark, PySpark, Airflow, NiFi, Kafka, Hive, Iceberg, Oozie
Programming Languages: Python, Scala, Java
Data Formats & Storage: Parquet, ORC, Avro, S3, HDFS
Orchestration & Workflow: Airflow, DBT
Performance Optimization: Spark tuning, partitioning strategies, caching, YARN/K8s resource tuning
Testing & Validation: Great Expectations, Deequ, SQL-based QA frameworks (a reconciliation sketch follows this list)
Observability & Monitoring: Datadog, Grafana, Prometheus
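As a sketch of the SQL-based QA approach noted under Testing & Validation, the example below reconciles a legacy table against its migrated counterpart with coarse aggregate checks; the paths and columns are hypothetical, and richer rule sets would typically use Great Expectations or Deequ.

```python
# Reconciliation sketch; table paths and checked columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import count, countDistinct, lit, sum as spark_sum

spark = SparkSession.builder.appName("migration-validation-sketch").getOrCreate()

legacy = spark.read.parquet("s3a://example-bucket/legacy/orders/")
migrated = spark.read.parquet("s3a://example-bucket/curated/orders/")

def profile(df):
    """Row count, distinct key count, and an amount total for a coarse reconciliation."""
    row = df.agg(
        count(lit(1)).alias("rows"),
        countDistinct("order_id").alias("distinct_keys"),
        spark_sum("amount").alias("amount_total"),
    ).collect()[0]
    return row.asDict()

legacy_stats, migrated_stats = profile(legacy), profile(migrated)

# Exact equality keeps the sketch simple; sums may need a rounding tolerance in practice.
mismatches = {
    name: (legacy_stats[name], migrated_stats[name])
    for name in legacy_stats
    if legacy_stats[name] != migrated_stats[name]
}

if mismatches:
    raise AssertionError(f"Migration validation failed: {mismatches}")
print("Legacy and migrated tables reconcile on the checked metrics.")
```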
4–8 years of experience in big data engineering or application modernization in enterprise settings.
Prior experience with Cloudera, MAPR, or Hadoop ecosystems, including transitioning workloads to open-source frameworks.
Strong understanding of distributed data architectures and data lake design principles.
Exposure to hybrid or cloud-native environments (AWS, GCP, or Azure).
Familiarity with regulated environments (financial services, telecom, healthcare) is a plus.
Successful refactoring and execution of legacy data pipelines within NexusOne environments.
Measurable performance improvements (execution time, cost optimization, data quality metrics).
Delivered migration artifacts — including conversion patterns, reusable scripts, and playbooks.
Positive feedback from Wells Fargo application owners on migration support and knowledge transfer.
Consistent adherence to coding standards, documentation, and change management practices.
At Nexus, we value people who want to grow — and support each other while doing so.
You can expect:
A collaborative team culture built on curiosity and respect
Challenging work where your contributions clearly matter
A leadership team that invests in learning and development
The opportunity to work at the intersection of cloud, data, and AI innovation
If this role sounds like a great fit — or even close to one — we’d love to hear from you. We know that no candidate checks every single box, and we’re excited to meet people who bring curiosity, talent, and a desire to do meaningful work together.
What We Do
Nexus Cognitive takes enterprises from data to outcomes at unprecedented speed and scale. We’ve revolutionized the way enterprises get value from their data with a composable, agnostic framework that enables you to rapidly build new solutions with modular, pre-integrated data components. Built with open standards, we give you the freedom to work with the data, systems, and toolsets of your choice, while our universal data catalog provides robust governance to ensure compliance and cut risks.
Through close customer collaboration, we design solutions that connect data pipelines and increase data access from on-prem to multi-cloud, with complete visibility across the data ecosystem. Cut data complexity, get AI-ready, and prove ROI in weeks, not months, with the fully managed data framework and outcomes from Nexus Cognitive.