Sr Software Engineer at The Walt Disney Company (San Francisco, CA)
Disney Streaming is a premium streaming TV destination that seeks to captivate and connect viewers with the stories they love. We create amazing experiences that celebrate the best of entertainment and technology. We're looking for great people who are passionate about redefining TV through innovation, unconventional thinking and embracing fun. It's a mission that takes serious smarts, intense curiosity and determination to be the best. Come be part of the team that's powering play.
The Big Data Infrastructure team is seeking a Senior Software Engineer to join us. The right person for this role has proven experience working on mission-critical infrastructure and enjoys building and maintaining large-scale data systems with varied requirements and large storage capacities. If you enjoy building large-scale big data infrastructure, this is a great role for you.
WHAT YOU'LL DO
- Develop, scale and improve in-house, cloud, and open-source Hadoop-related systems (e.g., Spark, Flink, Presto, HDFS, Hive, Kubernetes, EMR, EKS, Glue, IAM, Terraform).
- Investigate new big data technologies and apply them to the Disney Streaming production environment.
- Build next-gen cloud-based big data infrastructure for batch and streaming data applications, and continuously improve performance, scalability and availability.
- Handle architectural and design considerations such as performance, scalability, reusability and flexibility.
- Advocate engineering best practices, including the use of design patterns, code review and automated unit/functional testing.
- Work together with other engineering teams to influence them on big data system design and optimization.
- Define and lead the adoption of best practices and processes.
- Collaborate efficiently with Product Managers and other developers to build datastores as a service.
- Collaborate with senior internal team members and external stakeholders to gather requirements and drive implementation.
WHAT TO BRING
- BS or MS degree in Computer Science or a related field.
- 5+ years of professional programming and design experience.
- Familiarity with one or more open-source big data systems (HDFS, YARN, HBase, Hive, Spark, Flink, Presto, Kubernetes, etc.), including code-level customization of at least one. 3+ years of big data experience.
- Willingness to dive deep and become an expert in one or more big data systems; ready to open up the code of open-source software systems to fix bugs, add new features, improve performance, and contribute back to the community.
- Strong technical passion, terrific problem-solving skills, drive for results, and the ability to work independently.
- Good communication and collaboration skills.
- Experience in building in-house big data infrastructure.
- Experience in developing and optimizing Hadoop-related components (e.g., HDFS, HBase, YARN, Hive, Spark, Flink, Presto, Impala).
- Demonstrated ability with cloud infrastructure technologies, including Terraform, Kubernetes, Spinnaker, IAM, ELB, etc.
- Experience in managing a big data cluster with over 1000 nodes
The hiring range for this position in Santa Monica, California is $136,038.00-$182,490.00 per year. The base pay actually offered will take into account internal equity and also may vary depending on the candidate's geographic region, job-related knowledge, skills, and experience among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.