Data Analyst Bigdata Jobs

These are the latest Data Analyst Bigdata job offers found.

  • 23/06/2022

    Madrid

    Description: At Grupo Digital we are looking for a Big Data Architect.
    Project duration: stable (long term / no end date)
    Location: Madrid
    Contract type: permanent
    Work arrangement: 100% remote
    Salary: to be discussed
    Must-have skills: senior data architect profile; Hadoop; HUE (Hive & Impala); Sqoop; Spark; Git for versioning, GitHub for repositories, and JIRA; experience with GCP (Google Cloud Platform); expert in PySpark.
    Grupo Digital: we are a group of technology companies, currently around 400 professionals working in different areas: development, systems, automation engineering, consulting... If you want to boost your career with a unique project alongside a first-rate multinational, do not hesitate to submit your application. We look forward to hearing from you!
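    As a rough illustration of the stack this role names (Spark/PySpark over Hive tables, as browsed in HUE, on a Hadoop or GCP cluster), here is a minimal PySpark sketch. The table, columns and output path are hypothetical placeholders, not details from the posting.

      # Minimal PySpark sketch: read a Hive table, aggregate, write Parquet.
      # All names (analytics.web_events, user_id, the GCS bucket) are invented.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = (
          SparkSession.builder
          .appName("bigdata-architect-sketch")
          .enableHiveSupport()      # query tables registered in the Hive metastore
          .getOrCreate()
      )

      events = spark.table("analytics.web_events")          # hypothetical Hive table

      daily = (
          events
          .groupBy(F.to_date("event_ts").alias("event_date"))
          .agg(F.countDistinct("user_id").alias("daily_users"))
      )
      daily.write.mode("overwrite").parquet("gs://example-bucket/daily_users/")  # hypothetical path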

  • 23/06/2022

    Madrid

    Description: At Grupo Digital we are looking for a Data Architect (MongoDB).
    Project duration: stable (long term / no end date)
    Location: Madrid
    Contract type: permanent
    Salary: to be discussed
    Duties: validation of non-relational data models (MongoDB); validation of solution models, whether on-premise (Informix, Kafka, OpenShift, Red Hat Fuse, microservices) or Azure (MongoDB Atlas); validation of technical designs; optimization of SQL queries (Oracle, Informix) and NoSQL queries (MongoDB); writing guidelines and standards; writing Unix scripts; level L8-L9.
    Grupo Digital: we are a group of technology companies, currently around 400 professionals working in different areas: development, systems, automation engineering, consulting... If you want to boost your career with a unique project alongside a first-rate multinational, do not hesitate to submit your application. We look forward to hearing from you!
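    To give a flavour of the MongoDB query-optimization side of this role, below is a small, hedged sketch using the pymongo driver: create an index for a frequent access pattern and inspect the query plan. The connection string, database, collection and fields are invented for the example.

      # Hypothetical example: index a common access pattern and check the plan.
      from pymongo import MongoClient, ASCENDING

      client = MongoClient("mongodb://localhost:27017")    # placeholder URI
      orders = client["shop"]["orders"]                    # invented collection

      # Compound index supporting "orders for a customer since a date"
      orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

      plan = orders.find(
          {"customer_id": 42, "created_at": {"$gte": "2022-01-01"}}
      ).explain()
      print(plan["queryPlanner"]["winningPlan"])           # verify the index is used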

  • 21/06/2022

    Galicia

    We are looking for someone to carry out web analytics tasks. The main duties will be: focus on digital analytics and CRO, proposing improvements to the quality and efficiency of the brand's digital ecosystem; improve the tagging of digital assets with Google Tag Manager; prepare and improve the dashboards built in Google Data Studio. Requirements: degree in telecommunications, engineering, mathematics or similar; experience in data analysis (Data Studio, Google Analytics, etc.). Web Analyst.
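    As a hedged illustration of the CRO side of this role, here is a small pandas sketch that computes a weekly conversion rate from a hypothetical CSV export of analytics data; the file name and columns are assumptions, not part of the posting.

      # Illustrative only: weekly conversion rate from an assumed CSV export
      # with columns date, sessions, transactions.
      import pandas as pd

      daily = pd.read_csv("ga_sessions_export.csv")        # hypothetical export

      daily["conversion_rate"] = daily["transactions"] / daily["sessions"]
      weekly = (
          daily
          .assign(week=pd.to_datetime(daily["date"]).dt.to_period("W"))
          .groupby("week")[["sessions", "transactions", "conversion_rate"]]
          .mean()
      )
      print(weekly.tail())                                 # most recent weeks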

  • 13/06/2022

    Madrid

    We are looking for a data analyst who wants to grow their career in data processing, using data to solve clients' business problems. They will need to locate the required data, clean it, organize it and present it to the end client in an understandable way. The work is done on Big Data and Cloud platforms; prior knowledge of these is not essential, but being able to write SQL queries against the various data engines is. The person will join a team with extensive data experience that will support them until they can work independently.
    Requirements: 1 year of experience in data environments; knowledge of SQL; knowledge of data modelling.
    Nice to have: English; ETL development tools such as Pentaho, PowerCenter or NiFi; building reports and dashboards / scorecards; knowledge of Big Data.
    Benefits: 24k salary; flexible hours; hybrid working (2 days in the office, 3 remote).
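    Since the one hard requirement is SQL, here is a tiny self-contained sketch of the locate-clean-present loop the posting describes, using an in-memory SQLite database as a stand-in for the client's data engines; the table and figures are invented.

      # Invented data: clean in SQL (drop NULL amounts), aggregate, present.
      import sqlite3
      import pandas as pd

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE sales (region TEXT, amount REAL);
          INSERT INTO sales VALUES ('north', 120.0), ('north', NULL), ('south', 80.5);
      """)

      summary = pd.read_sql_query(
          """
          SELECT region, COUNT(*) AS orders, ROUND(SUM(amount), 2) AS revenue
          FROM sales
          WHERE amount IS NOT NULL
          GROUP BY region
          ORDER BY revenue DESC
          """,
          conn,
      )
      print(summary)   # small, client-readable summary table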

  • 06/06/2022

    Comunidad Valenciana

    Duties: Sopra Steria works to enable our clients' digital transformation, and to do so we need to keep growing and contributing thanks to people like you. Our employees value the work environment and the great team we form at Sopra Steria. With more than 46,000 people working in 25 countries, our mission is to connect talent and technology, helping you find a place where you can grow and develop your full potential. We require a Data Engineer highly skilled in database and ETL data pipeline development. The incumbent will be responsible for re-designing and implementing the set of automated ETL pipelines, implementing analytics on the platform's operations, and importing new data sources: work with the team (technical lead, architect, other team members) and the customer focal point to understand the business need and design/implement the technical data management solution; assist and work with the Solution Architect and Senior Data Warehouse Specialist to develop, test and deliver the various Work Packages as further detailed under "deliverables"; troubleshoot and remediate data problems affecting availability and functionality; generate and retain relevant technical documentation related to the technical services provided during the project period; collaborate efficiently with other team members and stakeholders; ensure alignment with WIPO's technical standards and procedures; deliver complete technical and user documentation; refactor the existing web analytics ETL pipeline to minimize inter-dependencies and remove hardcoded filters; migrate metadata storage from S3 to Aurora and implement analytics on this data; add additional data sources to the Data Platform (estimated time: 1 month); perform other related duties as required.
    Requirements (skills): hands-on experience writing code for Apache Spark with PySpark and Spark SQL (AWS Glue, Databricks, other Spark implementations); extensive proven experience in data warehouse / ETL development: SQL, CTEs, window functions, facts/dimensions; high attention to detail; excellent communication skills, spoken and written English; good understanding of data engineering pipelines; knowledge of data pipeline orchestrators and tools such as Azure Data Factory, Azure Logic Apps, and AWS Glue; knowledge of Python; data pipeline development with PySpark using Apache Spark and Databricks; customer-centric approach to delivery and problem solving.
    What we offer: because we know what you need... Take part in innovative and demanding projects. Would you venture to learn something new? Amenities for you and your time, because work won't be everything! Enjoy our benefits and access our flexible remuneration plan, Freekys + Smart Sessions. So that you feel part of the team: Andjoy, padel, running and even a physio, just in case. Dare to work in a different way and get to know us!
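    As a hedged sketch of the core skills listed (PySpark and window functions in a warehouse/ETL context), the snippet below keeps only the latest row per dimension key, a common de-duplication step; the data and column names are invented, and this is not WIPO's actual pipeline.

      # Invented example: keep the latest record per key using a window function.
      from pyspark.sql import SparkSession, Window
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("etl-window-sketch").getOrCreate()

      facts = spark.createDataFrame(
          [("A", "2022-06-01", 10.0), ("A", "2022-06-02", 12.5), ("B", "2022-06-01", 7.0)],
          ["dimension_key", "load_date", "metric"],
      )

      w = Window.partitionBy("dimension_key").orderBy(F.col("load_date").desc())
      latest = (
          facts
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn")
      )
      latest.show()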

  • 23/05/2022

    Comunidad Valenciana

    Duties: Sopra Steria works to enable our clients' digital transformation, and to do so we need to keep growing and contributing thanks to people like you. Our employees value the work environment and the great team we form at Sopra Steria. With more than 46,000 people working in 25 countries, our mission is to connect talent and technology, helping you find a place where you can grow and develop your full potential. We require a Data Engineer highly skilled in database and ETL data pipeline development. The incumbent will be responsible for re-designing and implementing the set of automated ETL pipelines, implementing analytics on the platform's operations, and importing new data sources: work with the team (technical lead, architect, other team members) and the customer focal point to understand the business need and design/implement the technical data management solution; assist and work with the Solution Architect and Senior Data Warehouse Specialist to develop, test and deliver the various Work Packages as further detailed under "deliverables"; troubleshoot and remediate data problems affecting availability and functionality; generate and retain relevant technical documentation related to the technical services provided during the project period; collaborate efficiently with other team members and stakeholders; ensure alignment with WIPO's technical standards and procedures; deliver complete technical and user documentation; refactor the existing web analytics ETL pipeline to minimize inter-dependencies and remove hardcoded filters; migrate metadata storage from S3 to Aurora and implement analytics on this data; add additional data sources to the Data Platform (estimated time: 1 month); perform other related duties as required.
    Requirements (skills): hands-on experience writing code for Apache Spark with PySpark and Spark SQL (AWS Glue, Databricks, other Spark implementations); extensive proven experience in data warehouse / ETL development: SQL, CTEs, window functions, facts/dimensions; high attention to detail; excellent communication skills, spoken and written English; good understanding of data engineering pipelines; knowledge of data pipeline orchestrators and tools such as Azure Data Factory, Azure Logic Apps, and AWS Glue; knowledge of Python; data pipeline development with PySpark using Apache Spark and Databricks; customer-centric approach to delivery and problem solving.
    What we offer: because we know what you need... Take part in innovative and demanding projects. Would you venture to learn something new? Amenities for you and your time, because work won't be everything! Enjoy our benefits and access our flexible remuneration plan, Freekys + Smart Sessions. So that you feel part of the team: Andjoy, padel, running and even a physio, just in case. Dare to work in a different way and get to know us!
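    The same requirements also call out SQL with CTEs and window functions; below is the equivalent of the previous sketch expressed in Spark SQL, again with invented table and column names.

      # Invented example: the same de-duplication written as a CTE in Spark SQL.
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("spark-sql-cte-sketch").getOrCreate()

      spark.createDataFrame(
          [("A", "2022-06-01", 10.0), ("A", "2022-06-02", 12.5), ("B", "2022-06-01", 7.0)],
          ["dimension_key", "load_date", "metric"],
      ).createOrReplaceTempView("facts")

      latest = spark.sql("""
          WITH ranked AS (
              SELECT *,
                     ROW_NUMBER() OVER (PARTITION BY dimension_key ORDER BY load_date DESC) AS rn
              FROM facts
          )
          SELECT dimension_key, load_date, metric
          FROM ranked
          WHERE rn = 1
      """)
      latest.show()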
