IT Data Engineer III - Remote

Paradigm


Software Engineering, IT, Data Science

Lombard, IL, USA

Posted on Apr 20, 2026

We are seeking a full-time, remote Data Engineer III. This role supports the continued proliferation of data-driven decision-making throughout Paradigm by designing, building, and maintaining robust data pipelines and architectures. The Data Engineer ensures that clean, reliable, and well-structured data is readily available to analysts, data scientists, and other key stakeholders for advanced analytics, reporting, and operational needs. The ideal candidate has a strong technical background in data integration, data modeling, and modern data engineering tools, specifically Microsoft Power BI, Microsoft Fabric, and a variety of ETL/ELT platforms, along with demonstrated experience working independently to prioritize data projects. Familiarity with healthcare and/or workers’ compensation terminology and business concepts is a plus.

RESPONSIBILITIES:

  1. Design and Maintenance of Data Architectures
    • Architect, build, and optimize Paradigm’s data pipelines, data lake, and data warehouse environments, ensuring efficient ingestion of data from SQL Server and other sources.
    • Incorporate modern data platform technologies such as Microsoft Fabric, where appropriate, to enhance scalability and performance.
  2. Develop and Optimize Data Pipelines
    • Develop and maintain efficient ETL/ELT processes using tools like Azure Data Factory, SSIS, or equivalent platforms.
    • Leverage robust integration tools to automate data flows and ensure data quality.
    • Monitor and optimize these pipelines for performance, scalability, and reliability.
  3. Collaborate with Cross-Functional Teams
    • Work independently with departmental leaders, business users, and analytics teams to define data requirements and prioritize projects with the highest ROI.
    • Partner closely with database administrators, BI developers, data scientists, and application teams to ensure data availability and alignment with business goals.
    • Support and collaborate on Power BI initiatives, enabling self-service analytics.
  4. Data Governance and Documentation
    • Develop and maintain comprehensive documentation for data pipelines, data models, and data governance processes.
    • Produce and update a data dictionary to facilitate the use of self-service analytics tools by non-technical business users.
  5. Implement Data Quality and Security Controls
    • Enforce data governance, security policies, and access controls to ensure data confidentiality, integrity, and compliance with regulatory requirements.
    • Configure and maintain role-based and object-level security for data pipelines, warehouses, and analytics tools.
  6. Change Management and Best Practices
    • Document and refine data engineering change management policies, aligning with industry best practices to maintain stable, secure, and version-controlled environments (e.g., using Azure DevOps, AWS CodePipeline, or similar tools).
  7. Evaluate and Implement New Technologies
    • Research and recommend new data engineering tools, frameworks, and solutions, with a particular focus on the Microsoft technology stack (e.g., Microsoft Fabric, Power BI).
    • Maintain awareness of industry trends to inform Paradigm’s strategic data architecture decisions.
  8. Support Ad Hoc Data Needs
    • Work closely with business users on ad hoc data requests, troubleshooting data issues, and providing solutions in a timely manner.
  9. Production Support and Monitoring
    • Proactively monitor and troubleshoot data pipelines and integrations across SQL Server, DB2, Oracle, and cloud environments to minimize downtime.
    • Quickly resolve incidents to ensure seamless data availability.
  10. Champion Data-Driven Culture
    • Advocate for data best practices across the organization, helping to empower both technical and non-technical stakeholders to leverage data effectively.
    • Encourage wider adoption of Power BI for data visualization and analytics.

QUALIFICATIONS:

  • Education
    • Bachelor’s degree in Computer Science, Information Systems, or a related field required.
  • Experience
    • 5+ years of data engineering or related experience, with a focus on designing and developing data solutions using Microsoft or equivalent enterprise BI and data platform tools.
    • Strong SQL knowledge, including both DDL and DML, with exposure to SQL Server, DB2, and Oracle.
    • Experience with ETL/ELT tools such as SSIS, Azure Data Factory, or equivalent solutions.
    • Familiarity with Power BI (development, administration, or deployment).
    • Exposure to Microsoft Fabric or similar data platforms is highly desirable.
    • Experience with source control (e.g., Azure DevOps, Git) for versioning and change management.
    • Cloud platform experience (Azure preferred; AWS or GCP also considered), especially with data storage and data processing components.
    • Proficiency in at least one programming or scripting language (Python, PowerShell, SQL, DAX/MDX, etc.) is beneficial.
    • Knowledge of big data frameworks (e.g., Spark) or streaming platforms (e.g., Kafka) is a plus.
  • Technical Skills
    • Proficiency with SQL Server Management Studio, Visual Studio, or equivalent IDEs.
    • Dimensional modeling (Kimball or similar methodologies) for data warehousing.
    • Working knowledge of modern data infrastructure concepts (data lakes, lakehouses, etc.).
  • Other Skills
    • Proficient in core Microsoft Office applications (Excel, Word, PowerPoint, Outlook). Advanced knowledge of the broader Microsoft 365 suite (SharePoint, Teams, Visio) is a plus.
    • Strong analytical and problem-solving abilities, with excellent communication skills to engage both technical and non-technical stakeholders.