Position Description:
Implements solutions and improves development agility and productivity, using DevOps and Continuous Integration/Continuous Delivery (CI/CD) tools -- Maven, Jenkins, Stash, Ansible, and Docker. Collaborates with internal and external teams to deliver technology solutions for business needs using data architecture patterns -- Lambda, Kappa, Event-Driven Architecture, Data as a Service, and Microservices. Delivers system automation by setting up CI/CD delivery pipelines using data movement technologies (ETL/ELT), REST APIs, and in-memory technologies. Uses business knowledge to translate the vision for divisional initiatives into business solutions by developing complex or multiple software applications and conducting studies of alternatives. Analyzes and recommends changes in project development policies, procedures, standards, and strategies to development experts and management.
Primary Responsibilities:
- Participates in architecture design teams.
- Defines and implements application-level architecture.
- Develops applications on complex projects, components, and subsystems for the division.
- Recommends development testing tools and methodologies and reviews and validates test plans.
- Responsible for QA readiness of software deliverables.
- Develops comprehensive documentation for multiple applications or subsystems.
- Establishes full project life cycle plans for complex projects across multiple platforms.
- Responsible for meeting project goals on time and on budget.
- Advises on risk assessment and risk management strategies for projects.
- Plans and coordinates project schedules and assignments for multiple projects.
- Acts as a primary liaison for business units to resolve various project/technology issues.
- Provides technology solutions to daily issues and technical evaluation estimates on technology initiatives.
- Advises senior management on technical strategy.
- Mentors junior team members.
- Performs independent and complex technical and functional analysis for multiple projects supporting several divisional initiatives.
- Develops original and creative technical solutions to ongoing development efforts.
Education and Experience:
Bachelor’s degree (or foreign education equivalent) in Computer Science, Computer Information Systems, Engineering, Information Technology, Information Systems, Mathematics, Physics, or a closely related field and five (5) years of experience as a Principal Software Engineer/Developer (or closely related occupation) developing big data solutions and streaming applications on-premises and in the cloud -- Amazon Web Services (AWS) and Azure.
Or, alternatively, a Master’s degree (or foreign education equivalent) in Computer Science, Computer Information Systems, Engineering, Information Technology, Information Systems, Mathematics, Physics, or a closely related field and three (3) years of experience as a Principal Software Engineer/Developer (or closely related occupation) developing big data solutions and streaming applications on-premises and in the cloud -- Amazon Web Services (AWS) and Azure.
Skills and Knowledge:
Candidate must also possess:
- Demonstrated Expertise (“DE”) performing Big Data Engineering -- designing, developing, and testing distributed, scalable big data processing and near real-time analytics platforms, solutions, and data pipelines, as well as streaming data lakes, for on-premises, AWS, and Azure cloud environments, using Java, Python, ETL/ELT jobs, and AWS services (SNS/SQS, DynamoDB, Batch, Kinesis, and Lambda or Step Functions).
- DE performing DevOps -- project planning, implementing, and testing Continuous Integration/Continuous Delivery (CI/CD) pipelines to provision Infrastructure as Code (IaC), using CloudFormation, uDeploy, Docker containers, ECS, Azure DevOps Pipelines, and Git -- in development, quality assurance, and production enterprise environments.
- DE performing enterprise architecture and security -- designing, documenting, and implementing highly scalable, highly available, secure, multi-tenant RBAC enterprise self-service Big Data and streaming platforms and applications with disaster recovery, using OAuth, the AWS Well-Architected Framework, and AWS Key Management Service for the security, reliability, and availability of customer data.
- DE performing performance engineering -- characterizing performance and fine-tuning Big Data platforms and near real-time distributed applications for high throughput and reliability; and tuning recovery of Spring Web and Spring Boot configurations, containers, instance types, and workflow queues, and auto-scaling microservices with observability -- using Splunk, Datadog, and CloudWatch metrics (for customer use cases).
Category: Information Technology
Fidelity’s hybrid working model blends the best of both onsite and offsite work experiences. Working onsite is important for our business strategy and our culture. We also value the benefits that working offsite offers associates. Most hybrid roles require associates to work onsite every other week (all business days, M-F) in a Fidelity office.
What We Do
At Fidelity, our goal is to make financial expertise broadly accessible and effective in helping people live the lives they want. We do this by focusing on a diverse set of customers: from 23 million people investing their life savings, to 20,000 businesses managing their employee benefits, to 10,000 advisors needing innovative technology to invest their clients’ money. We offer investment management, retirement planning, portfolio guidance, brokerage, and many other financial products.
Privately held for nearly 70 years, we’ve always believed that by providing investors with access to information and expertise, we can help them achieve better results. That’s been our approach -- innovative yet personal, compassionate yet responsible, grounded by a tireless work ethic -- and it is the heart of the Fidelity way.