Big Data Engineer (Dallas, TX)

What you'll do...
Position: Big Data Engineer
Job Location: 350 N Saint Paul St, Dallas, TX 75201
Duties: Defines the data elements and business requirements for the Enterprise Data Lake. Analyzes and strategizes programs to collect, store, and visualize data from various sources, and designs data elements into a structured data model. Designs the data model at the enterprise level using modeling tools such as Erwin, Visio, and Lucidchart. Prepares and documents metadata definitions for the Enterprise Data Lake. Designs and develops big data applications using Core Java and Scala; deploys the applications and workflows into cloud environments. Works within the Hadoop ecosystem using real-time streaming technologies such as Spark. Works with cloud-based systems, including Infrastructure as a Service and Platform as a Service. Works with NoSQL databases such as Cassandra or MongoDB. Utilizes ETL technologies and creates data pipelines; interprets business requirements and converts them into technical designs. Writes shell scripts on Linux to invoke and schedule applications; applies proficient knowledge of version control systems such as Git or SVN. Writes complex SQL and database-level programs such as procedures, functions, and triggers using Oracle SQL and PL/SQL, along with real-time monitoring and tuning of applications using database tools such as SQL Explain Plan, SQL Profiler, AWR reports, and SQL Analyzer.
Minimum education and experience required: Bachelor's degree or the equivalent in Computer Science, Information Technology, Engineering, or a related field plus 5 years of experience in software engineering or related experience.
Skills Required: Must have experience with: Coding and testing in relational database management systems (Oracle); Performance tuning in SQL and PL/SQL; Performing impact analysis on database objects; Building dashboards in Tableau; Performing server-level activities using Unix shell scripts; Job monitoring, scheduling, and configuration in UC4/Automic; Creating data models such as Star and Snowflake schemas; Designing ETL processes and pipelines in Hadoop using Hive, Spark, or Python; and Building data pipeline jobs using cloud technologies such as GCP, Azure, or AWS. Employer will accept any amount of experience with the required skills.
Wal-Mart is an Equal Opportunity Employer.
More Information on Walmart
Walmart operates in the Food industry. The company is located in Bentonville, AR, San Bruno, CA, and Sunnyvale, CA. Walmart was founded in 1962. It has 578,950 total employees. It offers perks and benefits such as Flexible Spending Account (FSA), Disability Insurance, Dental Benefits, Vision Benefits, Health Insurance Benefits, and Life Insurance. Walmart currently lists 896 open jobs.