Senior Manual QA Engineer (with API, AI, Database & Performance Testing Exposure) - SaaS
Job description
Please note that we are open to negotiation based on your experience and compensation expectations. Strong communication skills (10/10) are essential for this role. Experience in AI or chatbot testing will be considered a valuable advantage.
We are seeking a highly detail-oriented and experienced Manual QA Engineer to join our dynamic product delivery team. This role is pivotal in orchestrating the end-to-end quality assurance lifecycle—including comprehensive manual test execution, strategic test case development, API validation, backend/database verification, and non-functional performance assessments. The ideal candidate will operate at the intersection of quality, collaboration, and innovation, driving scalable, defect-free software releases that align with both business and user expectations.
Key Responsibilities
Test Planning and Strategy
o Collaborate with product owners, business analysts, and development teams to perform detailed requirement analysis and convert requirements into robust test strategies.
o Derive and prioritize high-coverage test scenarios, integrating both functional and non-functional dimensions aligned with sprint goals and release roadmaps.
o Define entry/exit criteria, traceability matrices, risk-based testing approaches, and test schedules in alignment with delivery milestones.
Test Case Creation & Management
o Design modular, reusable, and comprehensive test cases mapped to user stories, use cases, and acceptance criteria using tools like TestRail, Zephyr, or Xray.
o Include positive, negative, edge-case, boundary, exploratory, and UI/UX-focused validations to ensure maximum test coverage.
o Apply advanced test case optimization methods (e.g., decision tables, equivalence partitioning, state transitions) to ensure test efficiency and effectiveness.
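As an illustration of the optimization techniques listed above, here is a minimal sketch of equivalence partitioning combined with boundary-value analysis; the "age" field and its 18..65 valid range are hypothetical stand-ins for a real requirement:

```python
# Equivalence partitioning + boundary-value analysis for a hypothetical
# "age" field that must accept integers in the range 18..65 inclusive.

VALID_MIN, VALID_MAX = 18, 65

def is_valid_age(age: int) -> bool:
    """Stand-in for the system under test: accepts ages 18..65 inclusive."""
    return VALID_MIN <= age <= VALID_MAX

def boundary_cases(lo: int, hi: int) -> dict:
    """Derive the high-value test inputs at and around each partition edge."""
    return {
        lo - 1: False,  # invalid partition just below the range
        lo: True,       # lower boundary
        lo + 1: True,
        hi - 1: True,
        hi: True,       # upper boundary
        hi + 1: False,  # invalid partition just above the range
    }

# Execute only the derived cases instead of exhaustively testing every value.
results = {age: is_valid_age(age) == expected
           for age, expected in boundary_cases(VALID_MIN, VALID_MAX).items()}
assert all(results.values())
```

Six targeted inputs cover the same defect classes as testing all values in and around the range, which is the efficiency gain these techniques aim for.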
API Testing
o Validate RESTful and SOAP APIs using tools like Postman, Swagger, or REST Assured, ensuring endpoints behave correctly under valid and invalid conditions.
o Verify authentication mechanisms (OAuth, JWT), data contracts (JSON/XML schema), HTTP status codes, and response time SLAs.
o Create automated API test suites and data-driven scenarios where feasible to reduce regression cycle time.
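The data-driven API validation described above can be sketched as follows; the `/users/{id}` endpoint and its handler are hypothetical stand-ins, and in practice the same assertions would run against a live service through Postman's collection runner, REST Assured, or an HTTP client:

```python
# Hypothetical stand-in for an API under test: GET /users/{id}
def get_user(user_id):
    """Return (status_code, body) the way a real endpoint might."""
    users = {1: {"id": 1, "name": "Ada"}}
    if not isinstance(user_id, int):
        return 400, {"error": "id must be an integer"}
    if user_id not in users:
        return 404, {"error": "not found"}
    return 200, users[user_id]

# Data-driven scenarios covering valid, not-found, and invalid conditions.
SCENARIOS = [
    {"input": 1,     "status": 200, "required_keys": {"id", "name"}},
    {"input": 999,   "status": 404, "required_keys": {"error"}},
    {"input": "abc", "status": 400, "required_keys": {"error"}},
]

def run_scenarios():
    """Execute every scenario; return a list of failures (empty = pass)."""
    failures = []
    for case in SCENARIOS:
        status, body = get_user(case["input"])
        if status != case["status"]:
            failures.append((case["input"], "status", status))
        # Contract check: the response body must contain the expected keys.
        if not case["required_keys"] <= set(body):
            failures.append((case["input"], "schema", body))
    return failures

assert run_scenarios() == []
```

Adding a scenario is a one-line table entry rather than a new test, which is what shrinks regression cycle time.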
Database Testing
o Perform backend validation of transactional workflows by executing complex SQL queries across normalized schemas (e.g., MySQL, PostgreSQL, Oracle, MongoDB).
o Ensure data integrity, consistency, CRUD operations, constraints, and business rules are properly enforced at the database layer.
o Validate data synchronization across integrated systems and downstream data pipelines when applicable.
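The backend checks described above can be sketched with an in-memory SQLite database; the tables and constraints are hypothetical, and production validation would run equivalent queries against MySQL, PostgreSQL, or Oracle:

```python
import sqlite3

# In-memory database standing in for a transactional backend.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    );
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        amount REAL NOT NULL CHECK (amount > 0)
    );
""")
conn.execute("INSERT INTO customers (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO orders (customer_id, amount) VALUES (1, 42.5)")
conn.commit()

# 1) Data-integrity check: no orphaned orders.
orphans = conn.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.id IS NULL
""").fetchall()
assert orphans == []

# 2) Constraint check: a duplicate email must be rejected.
try:
    conn.execute("INSERT INTO customers (id, email) VALUES (2, 'a@example.com')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
assert duplicate_rejected

# 3) Business-rule check: a non-positive amount must be rejected.
try:
    conn.execute("INSERT INTO orders (customer_id, amount) VALUES (1, -1)")
    negative_rejected = False
except sqlite3.IntegrityError:
    negative_rejected = True
assert negative_rejected
```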
Manual Test Execution
o Execute test cases manually across diverse test environments, operating systems, and device profiles including cross-browser and responsive validation.
o Conduct multiple levels of testing, including functional, regression, smoke/sanity, integration, system, UAT support, and localization/accessibility (where applicable).
Performance and Load Testing (Basic)
o Collaborate with performance engineers or utilize tools like JMeter or BlazeMeter to simulate user load and capture baseline metrics (TPS, memory usage, response time).
o Execute lightweight load tests on APIs or key application workflows to identify bottlenecks early in the SDLC.
o Contribute to performance tuning recommendations and assist in validating scalability improvements post-deployment.
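The lightweight load testing mentioned above can be sketched with a concurrent driver that collects baseline metrics; the `workflow` function is a hypothetical stand-in for an API call, and real runs would target a live service via JMeter, BlazeMeter, or an HTTP client:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def workflow():
    """Hypothetical stand-in for one API call or user action."""
    time.sleep(0.01)  # simulate ~10 ms of server work
    return 200

def load_test(concurrency: int, requests: int) -> dict:
    """Fire `requests` calls across `concurrency` workers; collect metrics."""
    latencies = []

    def timed_call():
        start = time.perf_counter()
        status = workflow()
        latencies.append(time.perf_counter() - start)
        return status

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(lambda _: timed_call(), range(requests)))
    elapsed = time.perf_counter() - start

    return {
        "errors": sum(s != 200 for s in statuses),
        "throughput_rps": requests / elapsed,          # rough TPS baseline
        "p95_latency_s": statistics.quantiles(latencies, n=20)[-1],
    }

metrics = load_test(concurrency=10, requests=100)
assert metrics["errors"] == 0
```

Capturing error rate, throughput, and p95 latency per run gives the baseline against which later tuning and scalability changes can be validated.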
Defect Lifecycle Management
o Log, triage, and track defects using enterprise-level defect tracking tools like JIRA, Bugzilla, or Azure DevOps.
o Perform root cause analysis (RCA) in collaboration with development teams and ensure retesting and regression closure of resolved issues.
o Maintain rigorous defect documentation with replication steps, system logs/screenshots, and environmental metadata.
Test Reporting and Metrics
o Generate daily/weekly quality reports, DRE (Defect Removal Efficiency), defect density trends, and test execution dashboards.
o Provide QA sign-off with risk analysis, unresolved issue log, and exit report for release governance.
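The DRE metric mentioned above has a standard form, sketched here with illustrative defect counts:

```python
def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """DRE = defects removed before release / total defects, as a percentage."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total if total else 100.0

# Illustrative counts: 95 defects caught in QA, 5 escaped to production.
dre = defect_removal_efficiency(95, 5)
assert dre == 95.0
```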
Collaboration and Agile Engagement
o Work within Agile/Scrum or hybrid delivery models, actively participating in sprint planning, backlog grooming, retrospectives, and product demos.
o Foster a collaborative QA culture by sharing best practices, mentoring junior team members, and contributing to a shared test repository or QA Wiki.
Continuous Process Improvement
o Evaluate and adopt new tools, methodologies, and techniques in manual and semi-automated testing to streamline QA processes.
o Proactively suggest improvements to test coverage, defect prevention, and agile workflows through retrospectives and KPIs.
Required Qualifications
o Bachelor's degree in Computer Science, Engineering, or related technical discipline.
o 3–5 years of professional experience in manual testing, with proven ownership of quality assurance for multiple product releases.
o Strong understanding of STLC, SDLC, Agile methodologies, QA metrics, and traceability models.
o Proficiency in writing test cases, test scenarios, and defect reports aligned with user stories and requirements.
o Solid hands-on experience in API testing, backend data validation, and basic performance testing frameworks.
o Exposure to CI/CD pipelines, Git workflows, and test integration in build automation tools is a plus.
o Superior communication, documentation, and analytical thinking skills.
What We Do
Mantra is the parent company behind Meera.ai, an AI-powered texting platform that helps sales and admissions teams reach leads faster, qualify them with human-like conversations, and drive more conversions.
In addition to building Meera, Mantra previously operated Legal Growth Marketing—a performance-based lead generation company—which was recently acquired by a major media and technology group backed by one of the world’s largest private equity firms.