Big Data Java / J2EE Specialist Jobs

These are the latest Big Data Java / J2EE Specialist job offers found.


  • 15/06/2022

    Andalucía, Aragón, Asturias, País Vasco, Cantabria, Castilla y León, Castilla-La Mancha, Cataluña, Extremadura, Galicia, La Rioja, Madrid, Murcia, Navarra, Comunidad Valenciana, Non-peninsular

    We are a newly founded venture builder, focused on building new businesses and trying to revolutionize the related industries using the benefits of Distributed Ledger Technologies (DLT) and blockchain. We see blockchain and smart contracts as a meta-technology with huge potential and real applicability. Based on our experience in banking, regulation, product management, and software development, we will try to shape the future. Building the right team is the basis for reaching our goals: a good venture is the consequence of a high-performing team. We are creating an environment where you can reach your maximum potential, where people matter, and where transparency is in the DNA.

    Requirements: We are searching for good people who, first of all, fit our culture and mindset and, second, add their experience and expertise to our team. These are the skills we are looking for:

    - Your attitude will determine our altitude. Building new ideas on top of complex technologies and industries will be challenging. The team is key to success, and your ideas, positivity, and group focus will make the difference.
    - We are interested in your knowledge, but your capacity matters more to us. There are tons of technologies around; the way you tackle and use them will make the difference. We are trying to strike the right balance between attitude and aptitude.
    - Research and learning capabilities. The blockchain space is very new and evolving very fast; staying on top of the trends is key to helping us make the right decisions.
    - Distributed Ledger Technology and dapp development experience is very welcome! But don't worry if you have no idea: we will teach you what we know, and learn together.
    - You think that understanding the business domain, language, and opportunities is key to defining the best solutions.
    - Minimum 3-5 years of experience in data aggregation, ETL implementation, and data visualization.
    - Minimum 3-5 years of experience in backend development languages (Python, Java, and others). You are, or are willing to be, a language polyglot, given the importance of getting the best out of different technology stacks to create good solutions.
    - Agile development experience with Lean, Scrum, XP, etc.
    - Knowledge and daily use of open-source data engineering tools.
    - Experience automating data collection from REST/GraphQL APIs, flat files, databases, etc. (see the sketch after this listing). You are also comfortable with Git and GitHub.
    - Relational and non-relational database development experience.
    - Event streaming architecture experience: gathering events and building data storage.
    - Hands-on experience with ELK and RabbitMQ.
    - Experience developing data lakes in AWS.
    - Experience with visualization tools and ecosystems such as Apache Superset, Tableau, Looker, Power BI, and Treasure Data.
    - Proven knowledge of third-party data gathering: Google Analytics, Segment, email providers, APIs, etc.
    - Experience working with data pipelines and ETLs such as AWS Glue.
    - Experience with database technologies such as Postgres, MongoDB, Apache Druid, Neo4j, and Redshift.
    - Experience with storage systems, e.g. AWS S3, Parquet files, time-series data.
    - Experience deploying with cloud infrastructure providers like AWS.
    - Knowledge of big data security best practices.
    - Experience working with multiple layer-7 protocols: GraphQL, HTTP/REST, gRPC.
    - Fluent English.

    Plus: If you fit these specs, you are ready to be part of the team! But if you can bring one or several of the following as a plus, what are you waiting for to work with us?

    - You are passionate about automation.
    - Aspiring software craftsman.
    - Proven experience in the NFT market.
    - Event-driven architecture design experience.
    - Domain-Driven Design practitioner.
    - Experience with blockchain technologies.
    - Interest in / experience with big data topics; experience with Docker.
    - Experience deploying blockchain infrastructure.
    - Experience building SaaS applications.

    Offer:

    - Challenging projects and environment. Innovate and build new businesses with your own hands.
    - Flat organization and short decision paths.
    - Flexible working hours.
    - Remote working (based on team needs), plus remote work financial aid.
    - We are challenge oriented.
    - Social benefits: flexible payment plan, medical insurance, and others.
    - 25 vacation days.
    - 2 free days for conferences/courses, plus a budget.
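
    As an illustration of the data collection automation this listing asks for, here is a minimal sketch in Scala that pulls JSON from a REST API and lands it in a date-partitioned staging directory. The endpoint URL, directory layout, and file name are hypothetical; a production pipeline would land the data in an AWS S3 data lake (for example as Parquet) with retries and monitoring rather than on the local filesystem.

    ```scala
    import java.nio.file.{Files, Paths, StandardOpenOption}
    import java.time.LocalDate
    import scala.io.Source
    import scala.util.{Failure, Success, Try, Using}

    object CollectApiData {
      // Hypothetical endpoint; any JSON-over-HTTP API would do.
      val endpoint = "https://api.example.com/v1/events"

      def main(args: Array[String]): Unit = {
        // Fetch the raw JSON payload, closing the connection when done.
        val payload: Try[String] = Using(Source.fromURL(endpoint))(_.mkString)

        payload match {
          case Success(json) =>
            // Partition the landing path by ingestion date, a common lake layout.
            val dir = Paths.get("staging", s"dt=${LocalDate.now}")
            Files.createDirectories(dir)
            Files.write(dir.resolve("events.json"), json.getBytes("UTF-8"),
              StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)
            println(s"Landed ${json.length} bytes under $dir")
          case Failure(err) =>
            // In a real pipeline this would trigger a retry or an alert.
            System.err.println(s"Collection failed: ${err.getMessage}")
        }
      }
    }
    ```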

  • 08/06/2022

    Madrid

    We are recruiting a Data Engineer to join a project for a Big Data software vendor that leads its sector.

    Requirements:

    - At least 1 year of experience designing and developing processes in Big Data environments, in particular with Scala and Spark (Cloudera).
    - Experience in Hadoop environments: Spark, Scala, Kafka, Hive, Java, Impala, Flume, and other tools in the Cloudera ecosystem.
    - Knowledge of administration and operations services.
    - Good knowledge of Unix, plus SQL, scripting, and programming skills.
    - Intermediate English, to interact with other teams both verbally and in writing.

    Desirable:

    - Knowledge of technology integration tooling: automation, process control, deployment, and planning tools (Git, Jenkins, Nexus, JIRA, Confluence).
    - Knowledge of systems architecture.
    - Knowledge of both NoSQL and relational databases (Oracle).
    - Knowledge of data modeling tools (ERWIN or similar).
    - Knowledge of ETL and BI tools.

    Duties:

    - Design, develop, and debug Scala & Spark processes and scripts to ingest data sources in an automated way and to perform the required calculations (see the sketch after this listing).
    - Knowledge of the following tools: GitLab, JIRA, Nexus, Jenkins, Confluence.
    - Design and develop code, scripts, and data flows; install software.
    - Identify and ingest new data sources into the Big Data platform; integrate new sources of information.
    - Generate the documentation associated with the interfaces developed.
    - Support possible data processing and replication tasks. Requires specific technical knowledge and at least one year of hands-on experience working in a Hadoop (Cloudera) environment, including the operating systems, file systems, and corresponding tools.
    - Produce the associated technical documentation and the required progress reports.

    We offer:

    - Freelance contract.
    - Rate: 230-280 EUR/day, depending on the experience you bring.
    - Long-term project, with recurring work on further projects.
    - Great opportunities for professional development.
    - Workplace: Madrid.
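
    As an illustration of the Scala & Spark ingestion work this role describes, here is a minimal sketch of a batch job that reads raw CSV drops, tags each row with lineage columns, and writes Parquet. The paths, app name, and schema-inference setting are hypothetical; a real Cloudera job would pin an explicit schema and typically write to Hive-managed storage.

    ```scala
    import org.apache.spark.sql.{SaveMode, SparkSession}
    import org.apache.spark.sql.functions.{current_timestamp, input_file_name}

    object IngestCustomers {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ingest-customers")
          .getOrCreate()

        // Read the raw drop, inferring the schema for brevity; production
        // jobs would declare an explicit schema instead.
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/data/landing/customers/*.csv")

        // Track lineage: which file each row came from and when it was loaded.
        val enriched = raw
          .withColumn("source_file", input_file_name())
          .withColumn("loaded_at", current_timestamp())

        // Append as Parquet; a real pipeline would partition by load date.
        enriched.write
          .mode(SaveMode.Append)
          .parquet("/data/curated/customers")

        spark.stop()
      }
    }
    ```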
