(Ref. 436/2019) - Big Data Engineer
- Contribute to the design and construction of the company data lake;
- Collect, store, process, and support the analysis of very large data sets, both structured and unstructured;
- Choose optimal solutions for big data use cases, then implement, maintain, monitor, and integrate them with the data and IT architecture used across the company;
- Build expertise in and teach the company about big data technologies; participate actively throughout the journey, from the discovery phase through the corporate data-centric transformation;
- Build solutions around key concepts: security and privacy by design.
What do I need to bring?
- Degree in Computer Science or equivalent;
- Knowledge of the Linux operating system (OS internals, networking, process level);
- Understanding of Big Data technologies (Hadoop, HBase, Spark, Kafka, Flume, Hive, etc.);
- 3+ years of experience building data pipelines, or equivalent;
- Understanding of one or more object-oriented programming languages (Java, C++, C#, Python);
- Fluent in at least one scripting language (Shell, Python, Ruby, etc.);
- Experience with at least one Hadoop distribution (Cloudera, MapR or preferably Hortonworks);
- Experience building complex data processing pipelines using continuous integration tools;
- Experience with Cassandra, MongoDB or equivalent NoSQL databases;
- Experience developing in an Agile environment.
What will be valued?
- Technical Certifications;
- Experience in designing big data/distributed systems;
- Experience creating and driving large scale ETL pipelines.
What can Syone offer me?
- Integration into an organization with profound and sustained growth, and involvement in pioneering projects with innovative technological solutions;
- Strong IT training plans;
- Professional evolution through intervention in ambitious technological projects, both national and international.