Data Engineer
Storyblok
Storyblok is the enterprise-ready headless CMS that empowers developers and marketers to bring ideas to market faster. It supports the entire content lifecycle—from creation and management to delivery—streamlining workflows, boosting productivity, and ensuring exceptional performance and accessibility. Storyblok frees you from the pain of legacy CMS platforms and empowers your teams to ship content quickly and build with complete flexibility.
Designed for global scalability and secure collaboration, Storyblok enables teams to deliver seamless, engaging digital experiences at scale. Trusted by leading brands like Oatly, Virgin Media O2, Deliveroo, Renault, and Education First, Storyblok helps businesses of all sizes unlock new opportunities, channels, and markets—delivering a bigger, faster market impact.
WHAT IS IN IT FOR YOU
You will be joining a growing company where you can contribute to many “firsts”. You will also enjoy these benefits:
- Monthly remote work stipend (to cover home internet and electricity costs), plus a home office equipment package right at the start (laptop, keyboard, monitor…)
- Home office equipment upgrade (furniture, earplugs…) or membership in a local co-working space after your onboarding
- Sick leave benefit, parental leave, and 25 days of annual leave plus your local national holidays
- Personal development fund for courses, books, conferences, and material
- VSOP (Virtual Stock Option Plan)
- An annual international team-building trip, plus quarterly and monthly online get-togethers
- Flexible schedules at a fully remote company with work-life balance at its core
- An international team that loves to have fun at work and works hard together to accomplish shared goals
JOB SUMMARY
Are you passionate about building data infrastructure that drives results? Do you thrive in a fast-paced environment where you can learn and grow? If so, this is the perfect opportunity for you!
About the Role
We're looking for a skilled and proactive Data Engineer to join our growing team. In this role, you'll be responsible for building, maintaining, and optimizing scalable data pipelines that serve as the backbone for analytics, reporting, and data-driven decision-making across the organization. You'll play a critical part in shaping our data infrastructure, with a particular focus on Amazon Redshift and dbt as core technologies in our modern data stack.
You’ll work closely with data scientists, product managers, analysts, and engineers to deliver clean, trusted, and well-documented data. This is an opportunity to work on high-impact data initiatives in a fast-moving environment where technical excellence and pragmatic problem-solving are highly valued.
ESSENTIAL JOB FUNCTIONS
- Design, build, and maintain reliable, scalable data pipelines for ingestion, transformation, and loading (ETL/ELT)
- Develop and optimize data models in Amazon Redshift, aligned with analytical and operational use cases
- Build and maintain dbt models to enable modular, testable, and well-documented transformations
- Implement robust data quality checks and monitoring to ensure high data integrity (a brief illustration follows this list)
- Work cross-functionally to understand data needs and deliver relevant, well-structured datasets
- Continuously refine and improve performance, cost-efficiency, and scalability of data workflows
- Document pipeline architecture, business logic, and data lineage
- Mentor junior team members and contribute to a culture of best practices in data engineering
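For illustration only, here is a minimal sketch of the kind of automated data quality check mentioned in the list above. It assumes a Redshift connection via psycopg2 and a hypothetical analytics.orders table; the table name, thresholds, and connection settings are placeholders, not a description of Storyblok's actual stack.

```python
# Minimal data-quality check sketch against a Redshift warehouse.
# Assumes psycopg2 is installed and connection details are provided via
# environment variables; "analytics.orders" is a hypothetical table.
import os
import psycopg2

CHECKS = {
    # check name -> (SQL returning a single numeric value, max allowed value)
    "null_order_ids": (
        "SELECT COUNT(*) FROM analytics.orders WHERE order_id IS NULL", 0),
    "duplicate_order_ids": (
        "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM analytics.orders", 0),
}

def run_checks() -> list[str]:
    """Run each check and return the names of any checks that failed."""
    failures = []
    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=5439,
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            for name, (sql, max_allowed) in CHECKS.items():
                cur.execute(sql)
                value = cur.fetchone()[0]
                if value > max_allowed:
                    failures.append(f"{name}: {value} > {max_allowed}")
    finally:
        conn.close()
    return failures

if __name__ == "__main__":
    failed = run_checks()
    if failed:
        raise SystemExit("Data quality checks failed:\n" + "\n".join(failed))
    print("All data quality checks passed.")
```

In practice, checks like these would typically live in dbt tests or a monitoring job and fail the pipeline before bad data reaches reporting.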
EDUCATION AND EXPERIENCE
- Bachelor’s degree in Computer Science, Information Systems, or related field—or equivalent professional experience
- 3+ years of experience in data engineering or a closely related field
- Strong SQL skills and experience with scripting languages (Python, Bash, JS/TS)
- Proven experience working with Amazon Redshift as a primary data warehouse
- Hands-on experience with dbt, including model design, testing, and documentation
- Solid understanding of data warehousing principles, including ETL/ELT workflows and dimensional modeling
- Experience with cloud-based infrastructure, preferably AWS
- Strong collaboration and communication skills in cross-functional settings
- Ability to balance technical excellence with practical delivery in a fast-paced environment
Bonus points:
- Experience in a SaaS or product-led growth environment
- Familiarity with orchestration tools like Airflow or Dagster (a brief illustration follows this list)
- Experience implementing CI/CD practices in data pipelines (e.g., GitHub Actions)
- Knowledge of data governance, access control, and security best practices
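As a sketch only: this is how dbt runs might be scheduled with Airflow, one of the orchestration tools listed above. The DAG id, schedule, project path, and commands are assumptions for illustration, not Storyblok's actual setup, and the example assumes Airflow 2.x.

```python
# Hypothetical Airflow DAG that runs dbt models and tests once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_run",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # run once per day
    catchup=False,
) as dag:
    # Build the dbt models in the warehouse (e.g. Redshift).
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run",
    )

    # Run dbt tests so data issues fail the pipeline rather than reaching dashboards.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test",
    )

    # Only test after the models have been rebuilt.
    dbt_run >> dbt_test
```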
What we offer:
- Opportunity to work on challenging and impactful projects
- Work with cutting-edge technologies in a SaaS environment
- A collaborative and supportive work environment
- Be part of a fast-paced and growing company
- The chance to learn and grow your skills alongside experienced data scientists
MENTAL, PHYSICAL AND ENVIRONMENTAL REQUIREMENTS
Remote (home) work opportunity, or a co-working space funded by Storyblok
GENERAL TERMS
Storyblok has a commitment to diversity and inclusion. We strive to create a hiring environment in which all people feel they are equally respected and valued, irrespective of gender identity or expression, sexual orientation, ethnicity, age, religion, citizenship or any other characteristic. You can find more information about our privacy policy here.
All communications regarding job opportunities at Storyblok will come from an official Storyblok employee with an email address ending in @storyblok.com. We will never redirect you to another portal or another site that is unrelated to our domain (storyblok.com).
If you need an accommodation for any part of the application process, please email talent.acquisition@storyblok.com