Kyiv, Ukraine | Remote, Worldwide
Our client RubyPlay is hiring a Data Engineer to help them scale their data infrastructure and tools and provide insights that optimise performance.
RubyPlay is a progressive and energetic iGaming development studio that specialises in the design and creation of highly entertaining and engaging slot games, as well as value-add tools for gaming operators, backed by over 20 years of creating cutting-edge gaming content.
What will you be responsible for?
- Take a leading role in every stage of our Data product's lifecycle, including requirements analysis, technical architecture design, development, testing, and deployment of the platform.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a variety of internal and external data sources.
- Build analytics tools that utilise the data pipeline to provide actionable insights into operational efficiency and key business performance metrics.
- Provide database expertise to our Platform product development: benchmarking systems, analysing bottlenecks, and proposing solutions to scale our gaming system.
- Work with stakeholders including the Product, Engineering and Commercial teams to assist with data-related technical challenges and support their data infrastructure needs.
What do we expect from our perfect candidate?
- 5+ years of experience in ETL development.
- 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud); GCP is a big advantage!
- 2+ years of experience with API integration/development.
- Advanced working knowledge of SQL and experience working with various types of databases.
- Extensive industry experience in Software Development, Data Engineering, Business Intelligence, Data Science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
- Experience building and optimising data pipelines, architectures, and data sets.
- Experience with business intelligence tools such as Tableau or Power BI.
- Experience using monitoring tools such as Grafana.
- English: professional working proficiency.
Nice to have:
- Experience coding and automating processes using Python or R.
- Experience with big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.).
- Experience with Vertica DB.
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
Benefits of working with us:
- Challenging tasks to feel productive
- Opportunity to attend paid conferences/training/workshops to feel fresh-minded
- Free English classes with native speakers to feel your growth
- Medical insurance to feel safe
- 20 paid vacation days / 7 paid sick days to feel human
- Friendly working environment to feel excited