Big Data Specialist Jobs - Amazon Web Services (AWS)

These are the latest Big Data Specialist Amazon Web Services (AWS) job offers found.


  • 15/06/2022

    Andalusia, Aragon, Asturias, Basque Country, Cantabria, Castile and Leon, Castile-La Mancha, Catalonia, Extremadura, Galicia, La Rioja, Madrid, Murcia, Navarra, Valencian Community, Non-peninsular

    We are a newly founded venture builder focused on creating new businesses, aiming to revolutionize the related industries using the benefits of Distributed Ledger Technologies (DLT) and blockchain. We see blockchain and smart contracts as a meta-technology with huge potential and real applicability. Drawing on our experience in banking, regulation, product management, and software development, we aim to shape the future. Building the right team is the foundation for reaching our goals: a good venture is the consequence of having the best-performing team. We are creating an environment where you can reach your maximum potential, where people matter, and where transparency is in our DNA.

    Requirements: We are looking for good people who, first of all, fit our culture and mindset and, second, add their experience and expertise to our team. These are the skills we are looking for:

      • Your attitude will determine our altitude. Building new ideas on top of complex technologies and industries will be challenging; the team is key to success, and your ideas, positivity, and group focus will make the difference.
      • We are interested in your knowledge, but your capacity matters more to us. There are tons of technologies around; the way you tackle and use them will make the difference. We are trying to strike the right balance between attitude and aptitude.
      • Research and learning capabilities. The blockchain space is very new and evolving fast; staying on top of trends is key to helping us make the right decisions.
      • Distributed Ledger Technology and dapp development experience is very welcome... but don't worry if you have none. We will teach you what we know, and we will learn together.
      • You believe that understanding the business domain, its language, and its opportunities is key to defining the best solutions.
      • Minimum 3-5 years of experience in data aggregation, ETL implementation, and data visualization.
      • Minimum 3-5 years of experience with backend development languages (Python, Java, and others). You are, or are willing to become, a language polyglot, given the importance of getting the best out of different technology stacks to create good solutions.
      • Agile development experience with Lean, Scrum, XP, etc.
      • Knowledge and daily use of open-source data engineering tools.
      • Experience automating data collection from REST/GraphQL APIs, flat files, databases, etc. (a sketch of this kind of task follows this listing). You are also comfortable with Git and GitHub.
      • Relational and non-relational database development experience.
      • Event streaming architecture experience: gathering events and building data storage. Hands-on experience with ELK and RabbitMQ.
      • Experience developing data lakes on AWS.
      • Experience with visualization tools and ecosystems such as Apache Superset, Tableau, Looker, Power BI, and Treasure Data.
      • Proven knowledge of third-party data gathering: Google Analytics, Segment, email providers, APIs, etc.
      • Experience working with data pipelines and ETLs such as AWS Glue.
      • Experience with database technologies such as Postgres, MongoDB, Apache Druid, Neo4j, and Redshift.
      • Experience with storage systems, e.g. AWS S3, Parquet files, time-series data.
      • Experience deploying with cloud infrastructure providers such as AWS.
      • Knowledge of big data security best practices.
      • Experience working with multiple layer-7 protocols: GraphQL, HTTP/REST, gRPC.
      • Fluent English.

    Plus: If you fit these specs, you are ready to be part of the team! But if you can also bring one or several of the following, what are you waiting for?

      • You are passionate about automation.
      • Aspiring software craftsman.
      • Proven NFT market experience.
      • Event-driven architecture design experience.
      • Domain-Driven Design practitioner.
      • Experience with blockchain technologies.
      • Interest in or experience with big data topics; experience with Docker.
      • Experience deploying blockchain infrastructure.
      • Experience building SaaS applications.

    Offer:

      • Challenging projects and environment: innovation and building new businesses with your own hands.
      • Flat organization and short decision paths.
      • Flexible working hours.
      • Remote working (based on team needs), with financial aid for remote work.
      • We are challenge oriented.
      • Social benefits: flexible payment plan, medical insurance, and others.
      • 25 vacation days.
      • 2 free days for conferences/courses, plus a budget.
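
    As a rough illustration of a task named in the requirements above (automated collection from a REST API, landed in AWS S3 as Parquet), here is a minimal sketch in Python. It is not code from the posting: the endpoint, bucket, and object key are hypothetical placeholders, and it assumes the requests, pandas, pyarrow, and boto3 packages plus valid AWS credentials.

        import io

        import boto3
        import pandas as pd
        import requests

        API_URL = "https://api.example.com/v1/events"  # hypothetical REST endpoint
        BUCKET = "example-data-lake-raw"               # hypothetical S3 bucket

        def land_events_as_parquet() -> None:
            """Fetch records from a REST API and land them in S3 as Parquet."""
            resp = requests.get(API_URL, timeout=30)
            resp.raise_for_status()

            # Normalize the JSON payload into a tabular frame.
            df = pd.DataFrame(resp.json())

            # Serialize to Parquet in memory (requires pyarrow or fastparquet).
            buf = io.BytesIO()
            df.to_parquet(buf, index=False)

            # Upload the bytes to the raw zone of the data lake.
            boto3.client("s3").put_object(
                Bucket=BUCKET,
                Key="events/dt=2022-06-15/events.parquet",
                Body=buf.getvalue(),
            )

        if __name__ == "__main__":
            land_events_as_parquet()

    In a real pipeline a job like this would typically be scheduled by an orchestrator and partitioned by ingestion date; that is the kind of work tools like AWS Glue automate at scale.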

  • 06/06/2022

    Valencian Community

    Duties: Sopra Steria works to enable our clients' digital transformation, and to do so we need to keep growing and contributing thanks to people like you. Our employees value the work environment and the great single team we form at Sopra Steria. With more than 46,000 people working in 25 countries, our mission is to connect talent and technology, helping you find a place where you can grow and develop your full potential.

    We require a Data Engineer highly skilled in database and ETL data pipeline development. The incumbent will be responsible for the redesign and implementation of the set of automated ETL pipelines, the implementation of analytics on the platform's operations, and the import of new data sources:

      • Work with the team (technical lead, architect, other team members) and the customer focal point to understand the business need and design/implement the technical data management solution.
      • Assist and work with the Solution Architect and Senior Data Warehouse Specialist to develop, test, and deliver the various work packages as further detailed below under "deliverables".
      • Troubleshoot and remediate data problems affecting availability and functionality.
      • Generate and retain relevant technical documentation related to the technical services provided during the project period.
      • Collaborate efficiently with other team members and stakeholders.
      • Ensure alignment with WIPO's technical standards and procedures.
      • Deliver complete technical and user documentation.
      • Refactor the existing web analytics ETL pipeline to minimize inter-dependencies and remove hardcoded filters.
      • Migrate metadata storage from S3 to Aurora and implement analytics on this data.
      • Add additional data sources to the Data Platform; estimated time: 1 month.
      • Perform other related duties as required.

    Requirements:

      • Hands-on experience writing code for Apache Spark with PySpark and Spark SQL (AWS Glue, Databricks, other Spark implementations); a minimal example follows this listing.
      • Extensive proven experience in data warehouse/ETL development: SQL, CTEs, window functions, facts/dimensions.
      • High attention to detail.
      • Excellent communication skills; spoken and written English.
      • Good understanding of data engineering pipelines.
      • Knowledge of data pipeline orchestrators and tools such as Azure Data Factory, Azure Logic Apps, and AWS Glue.
      • Knowledge of Python.
      • Data pipeline development with PySpark using Apache Spark and Databricks.
      • Customer-centric approach to delivery and problem solving.

    What we offer: Because we know what you need...

      • Taking part in innovative and demanding projects. Would you venture to learn something new?
      • Amenities for you and your time: work won't be everything! Enjoy our benefits and access our flexible remuneration plan, Freekys + Smart Sessions.
      • So that you feel part of the team: andjoy, padel, running, and even a physio, just in case.
      • Dare to work in a different way and get to know us!
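
    As an illustration of the PySpark, Spark SQL, and window-function skills requested above, here is a minimal, self-contained sketch; the table and column names are invented, and it assumes a local pyspark installation (the same DataFrame API runs on AWS Glue and Databricks).

        from pyspark.sql import SparkSession, Window
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("window-function-demo").getOrCreate()

        # Invented sample data standing in for a web analytics fact table.
        visits = spark.createDataFrame(
            [
                ("user-1", "2022-06-01", 120),
                ("user-1", "2022-06-02", 95),
                ("user-2", "2022-06-01", 40),
                ("user-2", "2022-06-03", 310),
            ],
            ["user_id", "visit_date", "duration_s"],
        )

        # Window function: rank each user's visits by duration, longest first.
        by_duration = Window.partitionBy("user_id").orderBy(F.col("duration_s").desc())
        ranked = visits.withColumn("rank", F.row_number().over(by_duration))

        # Keep only each user's longest visit.
        ranked.filter(F.col("rank") == 1).show()

        spark.stop()

    The same result can be expressed in Spark SQL as ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY duration_s DESC).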
