Senior Data Engineer

Canada
1-3 Years Experience
Healthtech • Software


Our Purpose

P\S\L Group is a global organisation dedicated to putting information at the service of medicine. The companies and people of the P\S\L Group aim to improve medical care by serving those who need it, those who provide it and those who seek to improve it.


Our primary purpose is to help clients increase the effectiveness of activities pertaining to scientific communication, medical education and product/service marketing. To this end, we want our information services to contribute to the goals we share with our clients, namely: to accelerate the advancement of medicine and help people enjoy better, longer lives.


Objective

If you are a Data Engineer with a craving for making sense of structured and unstructured data, with the goal of positively impacting people's lives, please read on!


We are looking for a Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The focus will be on working with the Data Engineering Team to design technologies that wrangle, standardize and enhance our master data and transactional data repositories, then build operational and monitoring processes to govern that data. You will also be responsible for federating this data across the enterprise using batch, streaming and microservices architectures.


Unique skills expected for this job are the ability to write clean, high-quality Python libraries that can be reused within our platform, and the ability to create cloud-native orchestration workflows that ingest structured and unstructured data in both streaming and batch modes, enrich it, and make it available for use throughout the enterprise.
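As a concrete illustration of the kind of small, reusable Python utility described above, here is a minimal data-cleansing sketch. The record schema and field names are illustrative assumptions, not part of this posting:

```python
from datetime import datetime, timezone

def cleanse_record(raw: dict) -> dict:
    """Normalize one raw record: trim string values, lowercase emails,
    drop empty fields, and stamp the ingestion time.
    (Field names like 'email' are illustrative, not a real schema.)"""
    cleaned = {}
    for key, value in raw.items():
        if isinstance(value, str):
            value = value.strip()
        if value in ("", None):
            continue  # drop empty fields rather than storing blanks
        if key == "email" and isinstance(value, str):
            value = value.lower()
        cleaned[key] = value
    # record when this row entered the pipeline, in UTC
    cleaned["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return cleaned
```

A function like this can be packaged once and reused across batch and streaming ingestion paths, which is the kind of library reuse the role calls for.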


What you will do 

  • Collaborate and work closely with product owners to define product features and roadmap
  • Build the infrastructure required for optimal data pipelines from a wide variety of data sources using Python, AWS services and big data technologies
  • Create and maintain enterprise-wide integration pipelines and API-based microservices leveraging various AWS services including Step Functions, Lambda, Fargate, Firehose etc., following microservices/microbatch architectural best practices
  • Design and work with databases running on PostgreSQL, Snowflake, DynamoDB, Elasticsearch, Redis
  • Build pipelines and APIs to fulfill both OLTP (high volume/low latency) use cases, as well as OLAP (batch/ETL) bulk load use cases
  • Identify, design, and implement internal process and platform improvements: automating manual processes, optimizing data delivery, redesigning for greater scalability etc.
  • Work with stakeholders including the Executive, DataOps and Business teams to assist with data-related technical issues and support related data infrastructure needs.
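To give a flavor of the event-driven pipeline work above, here is a small sketch of a helper a Lambda function might use: it converts a standard S3 event notification into SQS `send_message_batch` entry lists (SQS caps each batch at 10 entries). The message body schema is an illustrative assumption, not from the posting:

```python
import json

def s3_event_to_sqs_batches(event: dict, batch_size: int = 10) -> list:
    """Turn an S3 put-event into lists of SQS send_message_batch entries.
    SQS allows at most 10 entries per batch call, so records are chunked.
    (The MessageBody schema here is illustrative.)"""
    entries = []
    for i, record in enumerate(event.get("Records", [])):
        s3 = record["s3"]
        entries.append({
            "Id": str(i),  # per-batch entry id required by SQS
            "MessageBody": json.dumps({
                "bucket": s3["bucket"]["name"],
                "key": s3["object"]["key"],
            }),
        })
    # chunk into batches of at most batch_size entries
    return [entries[i:i + batch_size] for i in range(0, len(entries), batch_size)]
```

Keeping the event-to-message transformation pure like this makes it unit-testable without AWS credentials; the actual `boto3` calls stay at the edge of the handler.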


Who you are 

  • Possess a minimum of 2 years' experience implementing cloud-native production systems on AWS
  • Possess excellent Python coding skills
  • Possess intermediate-level SQL coding skills
  • Excellent analytical and problem-solving skills
  • Application integration experience leveraging microservices and microbatching
  • Experience with data cleansing, data wrangling, data quality, standardization, transformations and other ETL-type workloads
  • Experience with AWS cloud services: S3, API Gateway, Lambda, SQS, Step Functions, Fargate etc.
  • Experience with AWS SAM (Serverless Application Model) and serverless features and concepts
  • Experience with SQL and NoSQL databases, including Aurora PostgreSQL, Snowflake, DynamoDB and Elasticsearch
  • Experience with DevOps services and tools: git, CloudFormation, CodePipeline etc.
  • Advanced working SQL knowledge and experience working with relational databases - both operational DBs and data warehouses
  • Strong analytic skills related to working with unstructured datasets
  • BS/MS in Math, Computer Science, or equivalent experience
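For candidates unfamiliar with AWS SAM (mentioned in the requirements above), a minimal template wiring a Python Lambda behind API Gateway looks roughly like this. Resource, handler, and path names are illustrative, not from the posting:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  IngestFunction:                 # illustrative name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler        # module.function in the deployment package
      Runtime: python3.12
      Events:
        IngestApi:
          Type: Api               # implicit API Gateway REST endpoint
          Properties:
            Path: /ingest
            Method: post
```

`sam build` and `sam deploy` expand this into the underlying CloudFormation resources, which ties into the CloudFormation/CodePipeline experience the posting also asks for.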

