Data Engineer

Hiring Remotely in India
Remote
Mid level
Software
The Role
The Data Engineer designs and develops data pipelines for storage and processing, optimizes data integration, and collaborates with analysts on data models and KPIs, while leveraging cloud and AI technologies.
Why Choose Bottomline?

Are you ready to transform the way businesses pay and get paid? Bottomline is a global leader in business payments and cash management, with over 35 years of experience and more than $16 trillion in payments moved annually. We're looking for passionate individuals to join our team and help drive impactful results for our customers. If you're dedicated to delighting customers and promoting growth and innovation, we want you on our team!

Position Summary:

Bottomline is looking for a Data Engineer to grow with us.

The Data Engineer is responsible for designing, developing, and maintaining the infrastructure and systems required for data storage, processing, and analysis. They play a crucial role in building, managing, and testing (QA) the data pipelines that enable efficient and reliable data integration, transformation, and delivery for all data users across the enterprise. They also bring innovation and add value to the organisation by implementing AI-driven solutions across Data Engineering projects.

The Data Engineer will work on implementing data flows to make data available in the Enterprise Data Warehouse from systems of record and operational data stores. The Data Engineer will get to work with best-in-class cloud technologies (Snowflake, Fivetran, AWS, Azure, Salesforce, Airflow, etc.) and on AI projects within Data Engineering.
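
To make this concrete, below is a minimal, hedged sketch of what such a data flow could look like as an Airflow DAG (recent Airflow 2.x style) that stages records into a Snowflake table. Every name here (the DAG id, connection details, table, and columns) is a hypothetical placeholder rather than Bottomline's actual setup; the real pipelines might equally rely on Fivetran or another ingestion tool.

```python
# Hypothetical sketch only: stage account records from a system of record
# into a Snowflake staging table. Names, credentials, and schemas are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_accounts():
    """Pull records from the operational source (stubbed for illustration)."""
    # In practice this might be a Fivetran-managed sync, a Salesforce API call,
    # or a database export; here it simply returns two sample rows.
    return [
        {"id": "001", "name": "Acme Corp", "region": "EMEA"},
        {"id": "002", "name": "Globex", "region": "APAC"},
    ]


def load_to_snowflake(ti):
    """Insert the extracted rows into a warehouse staging table."""
    import snowflake.connector  # assumes snowflake-connector-python is installed

    rows = ti.xcom_pull(task_ids="extract_accounts")
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="EDW", schema="STAGING",
    )
    try:
        conn.cursor().executemany(
            "INSERT INTO STG_ACCOUNTS (ID, NAME, REGION) "
            "VALUES (%(id)s, %(name)s, %(region)s)",
            rows,
        )
    finally:
        conn.close()


with DAG(
    dag_id="stage_accounts_to_edw",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_accounts", python_callable=extract_accounts)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load
```

In practice a managed connector such as Fivetran might replace the hand-written extract and load steps, with the DAG orchestrating transformations and checks around them.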

Along with making data available in the Enterprise Data Warehouse, the Data Engineer will work with Data Analysts to implement data models and calculate key business KPIs for use across the wider business in reporting and analytics.
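
As an illustration, a KPI such as monthly payment volume per region might be derived from a fact table along the following lines. This is a hedged pandas sketch with entirely hypothetical table and column names; in a real warehouse such logic would more likely live as SQL or a modeled view.

```python
# Hypothetical KPI derivation: monthly payment volume and count per region.
import pandas as pd

# Stand-in for a fact table that would normally be read from the warehouse.
fact_payments = pd.DataFrame(
    {
        "payment_date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03"]),
        "region": ["EMEA", "EMEA", "APAC"],
        "amount_usd": [1200.0, 800.0, 450.0],
    }
)

monthly_kpis = (
    fact_payments
    .assign(month=fact_payments["payment_date"].dt.to_period("M"))
    .groupby(["month", "region"], as_index=False)
    .agg(payment_volume_usd=("amount_usd", "sum"), payment_count=("amount_usd", "size"))
)
print(monthly_kpis)
```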

The Data Engineer will have the opportunity to learn and develop their skills by working on assignments as part of a Scrum Team. They should be delivery-focused and driven to solve problems.


How you’ll contribute:

  • Design and develop data pipelines that extract data from various sources, transform it into the desired format, and load it into the appropriate data storage systems. 
  • Collaborate with data scientists and analysts to optimize models and algorithms for data quality, security, and governance. 
  • Integrate data from different sources, including databases, data warehouses, APIs, and external systems. 
  • Ensure data consistency and integrity during the integration process, performing data validation and cleaning as needed. 
  • Transform raw data into a usable format by applying data cleansing, aggregation, filtering, and enrichment techniques. 
  • Optimize data pipelines and data processing workflows for performance, scalability, and efficiency. Propose solutions that use AI assistants such as GitHub Copilot to improve efficiency and scalability.
  • Monitor and tune data systems, identify and resolve performance bottlenecks, and implement caching and indexing strategies to enhance query performance.
  • Implement data quality checks and validations within data pipelines to ensure the accuracy, consistency, and completeness of data. Use AI tools such as GitHub Copilot or other current AI tooling to automate querying data from plain-English user inputs when assessing accuracy, consistency, and completeness (see the sketch after this list).
  • Carry out data QA to validate pipelines and data post-integration, supporting the QA team.
  • Take authority, responsibility, and accountability for exploiting the value of enterprise information assets and of the analytics used to render insights for decision making, automated decisions, and augmentation of human performance.
  • Collaborate with leaders to establish the vision for managing data as a business asset. 
  • Establish the governance of data and algorithms used for analysis, analytical applications, and automated decision making. 
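
To illustrate the kind of data quality checks referenced above, here is a small, hedged Python sketch. The table and column names are hypothetical; real checks would be defined against the team's actual models, thresholds, and tooling.

```python
# Illustrative data quality checks for a staged table (hypothetical column names).
# Each check returns (check_name, passed, detail) so results can be logged or
# surfaced to the QA team.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list[tuple[str, bool, str]]:
    results = []

    # Completeness: the load should not be empty.
    results.append(("non_empty", len(df) > 0, f"{len(df)} rows loaded"))

    # Consistency: business keys must be unique.
    dupes = df["account_id"].duplicated().sum()
    results.append(("unique_account_id", dupes == 0, f"{dupes} duplicate keys"))

    # Completeness: required columns must not contain nulls.
    for col in ("account_id", "amount", "currency"):
        nulls = df[col].isna().sum()
        results.append((f"not_null_{col}", nulls == 0, f"{nulls} nulls in {col}"))

    # Validity: amounts should be non-negative for this (hypothetical) feed.
    bad = (df["amount"] < 0).sum()
    results.append(("non_negative_amount", bad == 0, f"{bad} negative amounts"))

    return results


if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "account_id": ["A1", "A2", "A2"],
            "amount": [100.0, -5.0, None],
            "currency": ["USD", "EUR", "EUR"],
        }
    )
    for name, passed, detail in run_quality_checks(sample):
        print(f"{'PASS' if passed else 'FAIL'} {name}: {detail}")
```

In a production pipeline, checks like these would typically run as a step after each load and feed their results into monitoring or the QA workflow.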

 

What will make you successful: 

  • A bachelor’s degree in computer science, data science, software engineering, information systems, or related quantitative field
  • At least four (4) years of work experience in data management disciplines, including data integration, optimization and data quality, or other areas directly relevant to data engineering responsibilities and tasks. 
  • Proven project experience developing and maintaining data warehouses (Snowflake experience is preferable)
  • Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize data to support AI, ML, and BI 
  • Strong proficiency in programming languages such as Java or Python, as well as other scripting languages
  • Previous experience with languages/tools such as SQL
  • Significant experience with ETL processes and building pipelines for data retrieval using REST APIs (see the sketch after this list); knowledge of ETL tools such as Talend or Informatica is preferable
  • Proficiency in OLAP, Star, Dimensional, and Snowflake schemas.
  • Basic knowledge of BI Tools – Power BI, Tableau.
  • Basic knowledge of DevOps tools – GitHub, Atlassian tools, VS Code, etc.
  • Experience working in a structured development environment (i.e., environment with the standard SDLC process).
  • Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure) and modern data warehouse tools (Snowflake, Databricks)
  • Experience with database technologies such as SQL, NoSQL, Oracle, or Teradata
  • Experience using AI in Data Engineering or Data Analysis work, and experience in data QA.
  • Knowledge of Apache technologies such as Kafka and Airflow for building scalable and efficient data pipelines (nice to have).
  • Ability to collaborate within and across teams of different technical knowledge to support delivery and educate end users on data products.
  • Expert problem-solving skills, including debugging skills, allowing the determination of sources of issues in unfamiliar code or systems, and the ability to recognize and solve repetitive problems.
  • Excellent business acumen and interpersonal skills; able to work across business lines at a senior level to influence and effect change to achieve common goals.
  • Ability to describe business use cases/outcomes, data sources and management concepts, and analytical approaches/options.
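
To illustrate the REST-API-based retrieval mentioned in the list above, here is a minimal, hedged sketch using Python's requests library with offset-based pagination. The endpoint, response structure, and authentication are hypothetical placeholders rather than any specific vendor's API.

```python
# Hypothetical example: pull all records from a paginated REST endpoint and
# write them to newline-delimited JSON for downstream loading into the warehouse.
import json

import requests

BASE_URL = "https://api.example.com/v1/invoices"   # placeholder endpoint
PAGE_SIZE = 500


def fetch_all(token: str) -> list[dict]:
    """Iterate through pages until the API returns fewer rows than requested."""
    headers = {"Authorization": f"Bearer {token}"}
    records, offset = [], 0
    while True:
        resp = requests.get(
            BASE_URL,
            headers=headers,
            params={"limit": PAGE_SIZE, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json().get("data", [])  # assumes a {"data": [...]} payload
        records.extend(page)
        if len(page) < PAGE_SIZE:
            return records
        offset += PAGE_SIZE


if __name__ == "__main__":
    rows = fetch_all(token="***")
    with open("invoices.ndjson", "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
```

Incremental extraction (for example, filtering by a last-modified timestamp) and retry handling would usually be layered on top of a loop like this.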

#LifeAtBottomline

#LI-DNI


We welcome talent at all career stages and are dedicated to understanding and supporting additional needs. We're proud to be an equal opportunity employer, committed to creating an inclusive and open environment for everyone.

Skills Required

  • Bachelor's degree in computer science, data science or a related field
  • At least four years of experience in data management disciplines
  • Experience developing and maintaining data warehouses
  • Proficient in programming languages such as Java or Python
  • Experience with SQL
  • Experience in ETL processes and building data pipelines using REST APIs
  • Knowledge of ETL tools like Talend or Informatica
  • Experience with cloud services such as AWS or Azure
  • Experience with modern data warehouse tools like Snowflake or Databricks
  • Proficiency in database technologies such as SQL, NoSQL, Oracle or Teradata
  • Experience using AI in Data Engineering or Data Analysis

The Company
HQ: Portsmouth, NH
5,395 Employees
Year Founded: 1989

What We Do

Bottomline (NASDAQ: EPAY) makes complex business payments simple, smart, and secure. Corporations and banks rely on Bottomline for domestic and international payments, efficient cash management, automated workflows for payment processing and bill review, and state-of-the-art fraud detection, behavioral analytics, and regulatory compliance solutions. Thousands of corporations around the world benefit from Bottomline solutions. Headquartered in Portsmouth, NH, Bottomline delights customers through offices across the U.S., Europe, and Asia-Pacific.
