Data Scientist Cloud Technologies Jobs

These are the latest Data Scientist Cloud Technologies job offers found.

 jobs found

  • 23/05/2022

    Comunidad Valenciana

    Duties
    Sopra Steria works to enable our clients' digital transformation, and to do so we need to keep growing and contributing thanks to people like you. Our employees agree on the work environment and the great team we are at Sopra Steria. With more than 46,000 people working in 25 countries, our mission is to connect talent and technology, helping you find a place where you can grow and develop your full potential. We require a Data Engineer highly skilled in database and ETL data pipeline development. The incumbent will be responsible for the redesign and implementation of the set of automated ETL pipelines, the implementation of analytics for platform operations, and the import of new data sources:
    - Work with the team (Technical Lead / Architect / other team members) and the customer focal point to understand the business need and design/implement the technical data management solution.
    - Assist the Solution Architect and Senior Data Warehouse Specialist to develop, test and deliver the various work packages as further detailed under "deliverables".
    - Troubleshoot and remediate data problems affecting availability and functionality.
    - Generate and retain relevant technical documentation related to the technical services provided during the project period.
    - Collaborate efficiently with other team members and stakeholders.
    - Ensure alignment with WIPO's technical standards and procedures.
    - Deliver complete technical and user documentation.
    - Refactor the existing web analytics ETL pipeline to minimize inter-dependencies and remove hardcoded filters.
    - Migrate metadata storage from S3 to Aurora and implement analytics on this data.
    - Add additional data sources to the Data Platform (estimated time: 1 month).
    - Perform other related duties as required.
    Requirements
    Skills:
    - Hands-on experience writing code for Apache Spark with PySpark and Spark SQL (AWS Glue, Databricks, other Spark implementations)
    - Extensive proven experience in data warehouse/ETL development: SQL, CTEs, window functions, facts/dimensions
    - High attention to detail
    - Excellent communication skills; spoken and written English
    - Good understanding of data engineering pipelines
    - Knowledge of data pipeline orchestrators and tools such as Azure Data Factory, Azure Logic Apps, and AWS Glue
    - Knowledge of Python data pipeline development with PySpark using Apache Spark and Databricks
    - Customer-centric approach to delivery and problem solving

    What we offer
    Because we know what you need...
    - Taking part in innovative and demanding projects. Would you venture to learn something new?
    - Amenities for you and your time. Work won't be everything! Enjoy our benefits and access our flexible remuneration plan, Freekys + Smart Sessions.
    - So that you feel part of the team: andjoy, padel, running and even a physio, just in case.
    - Dare to work in a different way and get to know us!
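The requirements above call for SQL with CTEs and window functions. As a minimal, self-contained sketch of both in one query, the following uses Python's built-in sqlite3 (SQLite 3.25+ supports window functions); the table name and data are invented purely for illustration, not taken from the posting:

```python
import sqlite3

# Hypothetical sales table, used only to illustrate the SQL features.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
INSERT INTO sales VALUES
  ('ES', '2022-01', 100), ('ES', '2022-02', 150),
  ('FR', '2022-01', 200), ('FR', '2022-02', 120);
""")

# A CTE names an intermediate result; a window function then computes
# a running total per region, ordered by month, without a GROUP BY.
query = """
WITH monthly AS (
    SELECT region, month, amount FROM sales
)
SELECT region, month,
       SUM(amount) OVER (
           PARTITION BY region ORDER BY month
       ) AS running_total
FROM monthly
ORDER BY region, month;
"""
for row in conn.execute(query):
    print(row)
```

The same pattern carries over almost verbatim to Spark SQL, where `WITH ... SELECT ... OVER (PARTITION BY ...)` is equally supported.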

  • 18/05/2022

    Comunidad Valenciana

    We seek new teammates with a can-do attitude, a creative mindset, curiosity, problem-solving skills and a thirst for knowledge. We'd welcome you with open arms! Reporting to the Head of Data Engineering, your main objectives are:

    MAJOR AREAS OF ACCOUNTABILITY:
    - Design and implement data pipelines to ingest heterogeneous data into the data lake / data warehouse in different scenarios (batch, streaming, ...), managing large and complex data sets.
    - Ensure data accuracy and correctness in the implemented pipelines.
    - Create custom software components and analytics applications.
    - In coordination and collaboration with the data insights team (in charge of the front end and of delivering data products), develop data preparation for different purposes and use cases (reporting, machine learning, data sharing, ...) and identify opportunities for data acquisition.
    - In coordination and collaboration with the data governance team, explore ways to enhance data quality and reliability.
    - Integrate up-and-coming data management and software engineering technologies into existing data structures.
    - Use agile software development processes to iteratively improve our back-end systems.
    - Ensure that all systems meet the business/company requirements as well as industry practices.

    INTERNAL AND EXTERNAL RELATIONSHIPS:
    - Internal: All Business Services, Product Lines, Architecture, Security, Data Insights & Governance, IT Ops, QA
    - External: IT Partners, external consultants

    PROFILE:
    PREVIOUS EXPERIENCE: Proven experience in data engineering or software engineering around data solutions in modern data architectures.
    EDUCATION LEVEL / CERTIFICATES: Bachelor's or Engineering degree or higher.
    LANGUAGES: Written and verbal proficiency in English (mandatory). Other languages (French, Spanish, etc.) are appreciated.
    TECHNICAL SKILLS:
    - Strong Python and SQL knowledge.
    - Strong knowledge of data integration / ETL and orchestration tools.
    - Experience with relational SQL and NoSQL databases.
    - Experience with cloud data platforms (GCP, AWS, Snowflake).
    - Experience with continuous integration development techniques.
    DESIRED:
    - Good knowledge of other programming languages, such as Java / Kotlin.
    - Knowledge of frameworks for developing streaming pipelines: Kafka, Apache Beam, Spark, ...
    - Knowledge of Terraform/Helm/K8s/Docker.
    - Knowledge of BI and visualization tools.
    PERSONAL CHARACTERISTICS:
    - Team player with a positive attitude and the ability to collaborate effectively.
    - Strong-willed and self-motivated.
    - Analytical mindset, process-focused and structured.
    - Proactive and self-starting.
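The first two accountabilities above (ingest pipelines that ensure data accuracy and correctness) can be sketched with stdlib Python only. Everything here is an assumption for illustration: the toy CSV, the `readings` schema, and the reject-and-count validation policy are invented, not taken from the posting:

```python
import csv
import io
import sqlite3

# Toy in-memory "source file"; a real pipeline would read from storage.
raw = io.StringIO("id,temp\n1,21.5\n2,not_a_number\n3,19.0\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, temp REAL)")

good, rejected = 0, 0
for row in csv.DictReader(raw):
    try:
        # Accuracy check: coerce types and reject rows that do not parse,
        # rather than silently loading malformed records.
        conn.execute("INSERT INTO readings VALUES (?, ?)",
                     (int(row["id"]), float(row["temp"])))
        good += 1
    except ValueError:
        rejected += 1  # quarantine the bad row instead of failing the batch
conn.commit()

print(good, rejected)  # 2 valid rows loaded, 1 rejected
```

The same validate-then-load shape scales up directly to the streaming and warehouse tooling the posting lists (Kafka consumers, Spark jobs, Snowflake loaders); only the I/O endpoints change.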
