Key Responsibilities
- Architect and optimize data warehouse platforms and underlying database systems.
- Design and implement data ingestion and transformation pipelines (ETL/ELT) using ADF, DBT, or similar tools to ensure reliable ingestion from multiple sources (SFTP, HL7, FHIR, etc.).
- Develop and maintain logical and physical data models supporting internal analytics, dashboards, and client-facing reporting applications.
- Ensure database performance, scalability, and reliability through tuning, indexing strategies, and proactive monitoring across production and staging environments.
- Lead strategic data initiatives by establishing architecture standards that enhance scalability, efficiency, and cost optimization across the data platform.
- Collaborate cross-functionally with software engineering, implementation, operations, and business teams to align on data needs and governance practices.
- Administer, maintain, and monitor database systems to ensure availability, integrity, recoverability, and secure access in alignment with disaster recovery readiness and business continuity best practices.
- Implement source control and DevOps practices (e.g., GitHub, ADO) for database change management and CI/CD-automated deployment of database changes.
- Mentor junior engineers in SQL development, reporting, analytics, and performance tuning best practices.
- Implement and enforce data governance standards for security, compliance, and consistency across all environments.
Qualifications
- 10+ years of experience in database engineering, architecture, or administration, including production-grade SQL Server environments.
- Proven experience designing and automating ETL pipelines and data warehouse solutions supporting analytics, reporting, and business intelligence platforms (e.g., Power BI, Tableau).
- Strong expertise in database performance optimization and modernization across on-prem and cloud environments, including SQL tuning, indexing, and cost/performance optimization for multi-tenant systems.
- Hands-on experience with MS-SQL, MySQL, PostgreSQL, and other database technologies.
- Familiarity with CI/CD pipelines, source control systems, and automated deployment of database artifacts.
- Knowledge of cloud data architectures and cost optimization in Snowflake or similar platforms.
- Strong understanding of data modeling, schema design, and normalization principles.
- Excellent collaboration and communication skills, with the ability to translate complex data concepts into actionable insights.
What Success Looks Like
- Stabilize Core Data Infrastructure (Reliability, Security, and Performance)
- Maintain fast, reliable, and scalable databases that support both internal analytics and client-facing reporting.
- Database systems are secure, recoverable, and compliant — fully aligned with disaster recovery and business continuity standards.
- Ensure that reports and analytics can be run with no impact on the performance of production systems.
- Automate and Evolve Data Operations
- ETL/ELT pipelines are automated, resilient, and efficient, capable of ingesting data from multiple formats (SFTP, HL7, FHIR, etc.) with minimal manual intervention.
- The data platform supports faster insights, more accurate reporting, and improved decision-making across the enterprise.
- Enable Advanced Analytics and Self-Service
- End users can quickly create and schedule their own reports.
- Data can be analyzed across multiple domains and/or across multiple clients.
- Innovate and Differentiate
- Successful implementation of a ChatGPT-like model using Retrieval-Augmented Generation (RAG) over data within the Abax Health domain.