- Data Architecture & Schema Design: Design, implement, and manage robust data schemas and pipelines tailored for AI workflows across systems and integrations, including the core application, model training, fine-tuning, and evaluation.
- Database Design & Data Modeling: Design and maintain scalable, efficient, and AI-optimized data models and database architectures (relational and NoSQL) to support data ingestion, transformation, and retrieval for generative AI and application needs.
- Dataset Curation: Lead the creation, organization, and versioning of datasets used in model development (structured and unstructured), including data labeling and augmentation workflows.
- Metadata & Lineage: Develop and maintain data and metadata tracking systems for datasets and AI models, enabling traceability, reproducibility, and responsible AI practices.
- Data Governance & Security: Enforce data privacy, compliance (e.g., GDPR, HIPAA), and security best practices throughout the data lifecycle.
- Cross-functional Collaboration: Work closely with data scientists to understand data needs for fine-tuning and experimentation; partner with product teams to ensure data alignment with application requirements.
- Quality & Validation: Implement automated validation, lineage tracking, and quality assurance mechanisms to ensure data reliability at scale.
- Tooling & Automation: Build or integrate tools to support data versioning, synthetic data generation, and performance monitoring.
- Documentation & Standards: Define and promote best practices for dataset documentation, data contracts, and data lineage to ensure consistency and usability across teams.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Science, or a related field.
- Proficiency in Python, SQL, and ETL pipeline development.
- Deep understanding of structured and unstructured data handling.
- Strong grasp of data modeling, metadata systems, and schema evolution.
- Experience implementing data governance, security, and privacy controls in regulated environments.
- Familiarity with tools like DVC, MLflow, Hugging Face Datasets, or custom dataset/metadata management systems.
- Experience supporting generative AI applications or LLM fine-tuning workflows.
- Familiarity with synthetic data generation and data augmentation strategies.
- Working knowledge of cloud platforms (AWS, GCP, Azure) and infrastructure tools like Docker.
- Exposure to data contracts and API-based data delivery for downstream AI applications.
- Knowledge of responsible AI, FAIR data principles, or machine learning compliance frameworks.
What We Do
RegScale overcomes the speed, timeliness, and cost-effectiveness limitations of legacy GRC by bridging security, risk, and compliance through our Continuous Controls Monitoring (CCM) platform.
Our CCM pipeline of automation, dashboards, and AI tools delivers lower program costs, strengthens security, and minimizes painful handoffs between teams. Achieve rapid certification for faster market entry, anticipate threats through proactive risk management, and automate evidence collection, access reviews, and controls mapping. Improve the return on investment (ROI) of existing tools by seamlessly exchanging data with our centralized CCM data lake, enabling continuous monitoring of security, risk, and compliance controls.

Heavily regulated organizations, including Fortune 500 enterprises in financial services and other sectors, as well as government agencies and the entities that serve them, use RegScale to enhance stakeholder trust, lower costs, adapt to evolving risks, and start and stay compliant. Our customers report a 90% faster path to compliance certifications and a 60% reduction in audit preparation effort, strengthening security programs while reducing costs. For more information, visit www.regscale.com.