Manager - Data Engineering

2 Locations
Hybrid
Mid level
Fintech • Financial Services
The Role
Manage data engineering functions, including the development and deployment of data processing technologies for analytics and machine learning, while leading a team and maintaining system performance.

Why GMF Technology?

Innovation isn’t just a talking point at GM Financial; it’s how we operate. From generative AI and cloud-native technologies to peer-led learning and hackathons, our tech teams are building real solutions that make a difference. We’re committed to AI-powered transformation, using advanced machine learning and automation to reimagine customer interactions and modernize operations, positioning GM Financial as a leader in digital innovation within a dynamic industry.

Join us and discover a workplace where your ideas matter, your development is prioritized, and you can truly make a global impact.

Responsibilities

About The Role:

We are expanding our efforts into complementary data technologies for analytics and decision support, in areas of ingesting and processing large data sets. Our interest is in enabling data science and search-based applications on large, low-latency data sets, in both batch and streaming processing contexts. To that end, this role will engage with team counterparts in exploring, developing, and deploying technologies for creating data sets through a combination of batch and streaming transformation processes. These data sets support offline and inline machine learning training and model execution, as well as search-engine-based analytics. Technology exploration and deployment activities include identifying opportunities that impact business strategy, selecting data solutions software, and defining hardware requirements based on business requirements. The role is also responsible for coding, testing, and documenting new or modified scalable analytic data systems, including automation for deployment and monitoring. This role works with team counterparts to architect an end-to-end framework built on a group of core data technologies, and also develops standards and processes for data engineering projects and initiatives.

  • Evaluate, research, and experiment with data engineering technologies in a lab setting to keep pace with industry innovation, while assessing business impact and viability for current use cases
  • Work with data engineering-related groups to showcase the capabilities of emerging technologies and enable the adoption of these technologies and associated techniques
  • Define and refine processes and procedures for the data engineering practice
  • Work closely with data scientists, data architects, ETL developers, other IT counterparts, and business partners to identify, capture, collect, and format data from external sources, internal systems, and the data warehouse to extract features of interest
  • Code, test, deploy, monitor, document, and troubleshoot data engineering processing and associated automation
  • Define data engineering architecture, both hardware and software, that reflects business requirements, for inclusion in the end-to-end solution architecture
  • Educate and develop ETL developers in data engineering to enable their transition into the data engineering practice
  • Conduct code reviews, suggest improvements, and support technology upgrades for common libraries; hand libraries over to the corresponding development teams for quality checks and support them through deployment to production
  • Support ETL developers and Operations teams in troubleshooting incidents, performing root cause analysis, and developing solutions that meet service level agreements
  • Work with Big Data, IT, and Information Security Operations teams to monitor and troubleshoot incidents and maintain service levels
  • Contribute to the evolving distributed systems architecture to meet changing requirements for scaling, reliability, performance, manageability, and cost
  • Report utilization and performance metrics to user communities
  • Contribute to planning and implementation of new and upgraded hardware and software releases
  • Monitor the Linux, Hadoop, and Spark communities and vendors, and report important defects, feature changes, and enhancements to the team
  • Research and recommend innovative and, where possible, automated approaches to administration tasks
  • Identify approaches that improve resource utilization, provide economies of scale, and simplify support
Qualifications

What Makes You A Dream Candidate?

  • Strong working knowledge of Hadoop and Spark cluster security, network connectivity, and I/O throughput, along with other factors that affect distributed system performance
  • Strong working knowledge of disaster recovery, incident management, and security best practices
  • Working knowledge of containers (e.g., Docker) and major orchestrators (e.g., Mesos, Kubernetes, Docker Datacenter)
  • Working knowledge of automation tools (e.g., Puppet, Chef, Ansible)
  • Working knowledge of software defined networking
  • Working knowledge of parcel-based upgrades with Hadoop (i.e., Cloudera)
  • Working knowledge of hardening Hadoop with Kerberos, TLS, and HDFS encryption
  • Working knowledge of directed-acyclic-graph stream processing using Beam, Flink, NiFi, and/or Samza
  • Excellent knowledge of Linux, AIX, or other Unix flavors
  • Working knowledge of cloud-based implementations (e.g., Microsoft Azure), with emphasis on security using ACLs and Artifactory Groups
  • Ability to accept change and to adapt to shifting organizational challenges and priorities
  • Ability to coach, develop and lead others
  • Ability to evaluate problems and issues quickly, and to make recommendations for courses of action
  • Ability to make independent decisions and use sound judgment in relation to the management of team members
  • Ability to prioritize tasks and ensure their completion in a timely manner
  • Excellent analytical and troubleshooting skills
  • Strong interpersonal, verbal and written skills

Experience and Education:

  • 5-7 years of experience in software engineering, including Java, Scala, and Python, required
  • 5-7 years of proficiency processing large data sets with Kafka, RabbitMQ, Flume, Hadoop, HBase, Cassandra, and/or Spark, or a similar distributed system, required
  • 3-5 years of hands-on scripting experience with Bash, Perl, or Ruby required
  • 3-5 years of hands-on development and processing experience with Kafka, HBase, Solr, and Hue required
  • 2-4 years of hands-on experience with ETL and Business Intelligence technologies such as Informatica, DataStage, Ab Initio, Cognos, BusinessObjects, or Oracle Business Intelligence required
  • 2-3 years of hands-on experience with SQL, data modeling, and relational databases such as Oracle, DB2, and Postgres required
  • Proven track record with NoSQL data stores such as MongoDB, Cassandra, HBase, Redis, or Riak, or with technologies that embed NoSQL with search, such as MarkLogic or Lily Enterprise, required
  • 0-2 years of management experience with a data engineering team preferred
  • High School Diploma or equivalent required
  • Bachelor’s Degree in related field or equivalent work or military experience required

What We Offer: A generous benefits package available on day one, including 401K matching, bonding leave for new parents (12 weeks, 100% paid), tuition assistance, training, GM employee auto discount, community service pay, and nine company holidays.

Our Culture: Our team members define and shape our culture — an environment that welcomes innovative ideas, fosters integrity, and creates a sense of community and belonging. Here we do more than work — we thrive.

Compensation: Competitive pay and bonus eligibility

Work Life Balance: Flexible hybrid work environment, 3 days a week in office




The Company
HQ: Fort Worth, TX
7,790 Employees
Year Founded: 1992

What We Do

GM Financial is the captive finance company and wholly owned subsidiary of General Motors, headquartered in Fort Worth, Texas. The company is a global provider of auto finance solutions, with operations in North America, Latin America, and China. Through our long-standing relationships with auto dealers, we offer attractive retail loan and lease programs to meet the needs of each customer. We also offer commercial lending products to dealers to help them finance and grow their businesses. GM Financial employs more than 9,000 hard-working team members, and we're always looking for new people with diverse talents. GM Financial is a workplace where dedicated people have the opportunity to work together and celebrate our successes. Our culture is based on respect, integrity, innovation, and personal development. GM Financial is committed to strengthening the communities where we live and work. Each year, we select several philanthropic organizations to support through our Signature Events program. The company and its team members actively support these organizations through many company-wide initiatives; in addition, we support numerous other nonprofit organizations through sponsorships and monetary donations.
