
dLocal: Senior DataOps Engineer

We Work Remotely

Full time · Worldwide · Posted February 7, 2026

About this Role

Headquarters: Barcelona / Madrid
URL: http://dlocal.com

Why should you join dLocal?

dLocal enables the biggest companies in the world to collect payments in 40 countries in emerging markets. Global brands rely on us to increase conversion rates and simplify payment expansion effortlessly. As both a payments processor and a merchant of record where we operate, we make it possible for our merchants to make inroads into the world’s fastest-growing emerging markets.

By joining us you will be part of an amazing global team that makes it all happen, in a flexible, remote-first, dynamic culture with travel, health, and learning benefits, among others. Being part of dLocal means working with 1,000+ teammates from 30+ different nationalities and developing an international career that impacts millions of people’s daily lives.

We are builders, we never run from a challenge, we are customer-centric, and if this sounds like you, we know you will thrive on our team.

What’s the opportunity?

As a Senior DataOps Engineer, you'll be a strategic professional shaping the foundation of our data platform. You’ll design and evolve scalable infrastructure on Kubernetes, operate Databricks as our primary data platform, enable data governance and reliability at scale, and ensure our data assets are clean, observable, and accessible.

What will I be doing?

Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently, using Kubernetes and Databricks as core building blocks

Design, build, and maintain Kubernetes-based infrastructure, owning deployment, scaling, and reliability of data workloads running on our clusters

Operate Databricks as our primary data platform, including workspace and cluster configuration, job orchestration, and integration with the broader data ecosystem

Improve existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency across batch and streaming workloads

Build and maintain CI/CD pipelines for data applications (DAGs, jobs, libraries, containers), automating testing, deployment, and rollback

Implement release strategies (e.g., blue/green, canary, feature flags) where relevant for data services and platform changes

Establish and maintain robust data governance practices (e.g., contracts, catalogs, access controls, quality checks) that empower cross-functional teams to access and trust data

Build a framework to move raw datasets into clean, reliable, and well-modeled assets for analytics, modeling, and reporting, in partnership with Data Engineering and BI

Define and track SLIs/SLOs for critical data services (freshness, latency, availability, data quality signals); a minimal freshness-check sketch follows this list

Implement and own monitoring, logging, tracing, and alerting for data workloads and platform components, improving observability over time

Lead and participate in on-call rotation for data platforms, manage incidents, and run structured postmortems to drive continuous improvement

Investigate and resolve complex data and platform issues, ensuring data accuracy, system resilience, and clear root-cause analysis

Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability

Work closely with the Data Enablement team, BI, and ML stakeholders to continuously evolve the data platform based on their needs and feedback

Stay current with industry trends and emerging technologies in DataOps, DevOps, and data platforms to continuously raise the bar on our engineering practices
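For illustration only, here is a minimal sketch of the kind of freshness SLI/SLO check mentioned above; the one-hour threshold, function names, and example data are assumptions for demonstration, not dLocal's actual implementation:

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_SLO_SECONDS = 3600  # assumed SLO: tables refreshed within 1 hour

def freshness_sli(last_updated: datetime) -> float:
    """SLI: age of the most recent successful refresh, in seconds."""
    return (datetime.now(timezone.utc) - last_updated).total_seconds()

def slo_met(last_updated: datetime) -> bool:
    """True while the freshness SLI stays inside the SLO budget."""
    return freshness_sli(last_updated) <= FRESHNESS_SLO_SECONDS

# Example: a table refreshed 10 minutes ago is within the 1-hour SLO.
last_refresh = datetime.now(timezone.utc) - timedelta(minutes=10)
print(slo_met(last_refresh))  # True
```

In practice, the `last_updated` timestamp would come from pipeline metadata (such as a catalog or job-run table), and the SLI would be exported to a monitoring system rather than printed.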

What skills do I need?

Bachelor’s degree in Computer Engineering, Data Engineering, Computer Science, or a related technical field (or equivalent practical experience)

Proven experience in data engineering, platform engineering, or backend software development, ideally in cloud-native environments

Deep expertise in Python and/or SQL, with strong skills in building data or platform tooling

Strong experience with distributed data processing frameworks such as Apache Spark (Databricks experience strongly preferred); a short PySpark sketch follows this list

Solid understanding of cloud platforms, especially AWS and/or GCP

Hands-on experience with containerization and orchestration: Docker and Kubernetes (EKS / GKE / AKS, or equivalent)

Proficiency with Infrastructure-as-Code (e.g., Terraform, Pulumi, CloudFormation) for managing data and platform components

Experience implementing CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins, CircleCI, ArgoCD, Flux) for data workloads and services

Experience in monitoring & observability (metrics, logging, tracing) using tools like Prometheus, Grafana, Datadog, CloudWatch, or similar

Experience with incident management: participating in or leading on-call rotations, handling incidents and running postmortems, and building automation and guardrails to prevent regressions

Strong analytical thinking and problem-solving skills, comfortable debugging across infrastructure, network, and application layers

Able to work autonomously and collaboratively
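As a rough illustration of the Spark experience described above, a minimal PySpark batch job might look like the following; the paths, column names, and aggregation are hypothetical and not taken from this posting:

```python
# Minimal PySpark sketch; assumes pyspark is installed locally.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-batch-job").getOrCreate()

# Read a hypothetical raw Parquet dataset, aggregate, and write results.
payments = spark.read.parquet("/data/raw/payments")
daily = (
    payments
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.count("*").alias("txn_count"))
)
daily.write.mode("overwrite").parquet("/data/curated/daily_txn_counts")
spark.stop()
```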

Nice to have

Experience designing and maintaining DAGs with Apache Airflow or similar orchestration tools (Dagster, Prefect, Argo Workflows); a minimal DAG sketch follows this list

Familiarity with modern data formats and table formats (e.g., Parquet, Delta Lake, Iceberg)

Experience acting as a Databricks admin/developer, managing workspaces, clusters, compute policies, and jobs for multiple teams

Exposure to data quality, data contracts, or data observability tools and practices
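To illustrate the orchestration tooling named above, a minimal Apache Airflow DAG might look like this; the DAG id, tasks, and schedule are assumptions for demonstration only:

```python
# Hypothetical two-task DAG: extract, then transform, once a day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data")  # placeholder for real ingestion logic

def transform():
    print("cleaning and modeling")  # placeholder for real transforms

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```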

What do we offer?

Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:

- Flexibility: we have flexible schedules and we are driven by performance.
- Fintech industry: work in a dynamic and ever-evolving environment, with plenty to build and boost your creativity.
- Referral bonus program: our internal talents are the best recruiters - refer someone ideal for a role and get rewarded.
- Learning & development: get access to a Premium Coursera subscription.
- Language classes: we provide free English, Spanish, or Portuguese classes.
- Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!
- dLocal Houses: want to rent a house to spend one week anywhere in the world coworking with your team? We’ve got your back!

What happens after you apply?

Our Talent Acquisition team is invested in creating the best candidate experience possible, so don’t worry, you will definitely hear from us.

We will review your CV and keep you posted by email at every step of the process! You can also check out our webpage, LinkedIn, Instagram, and YouTube for more about dLocal.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses.

These tools assist our recruitment team but do not replace human judgment; final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.

To apply: https://weworkremotely.com/remote-jobs/dlocal-senior-dataops-engineer
