
Big Data Consultant


Job ID:

13547

Location:

Ciudad Autónoma de Buenos Aires, Buenos Aires 

Category:

Database Administrator, Consultant, Data Analytics, Data Science, DBA

Date:

12/10/2019

Job Description:

An ERP consultant is sought to work on-site in Puerto Madero, from 9 a.m. to 6 p.m. The position offers salaried employment (relación de dependencia), health coverage (obra social), and other benefits.

Hadoop Developer Roles and Responsibilities:

 

The following are the tasks a Hadoop Developer is responsible for:

Hadoop development and implementation.

Loading data from disparate data sets.

Pre-processing using Hadoop ecosystem tools.

Translating complex functional and technical requirements into detailed designs.

Designing, developing, documenting, and architecting Hadoop applications.

Analyzing vast data stores to uncover insights.

Maintaining security and data privacy.

Creating scalable, high-performance web services for data tracking.

High-speed querying.

Proposing best practices and standards.
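The MapReduce pattern underlying several of these responsibilities can be sketched in plain Python. This is an illustrative, framework-free toy (a real job would run as distributed mapper and reducer tasks on a Hadoop cluster; the sample lines are invented):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    """Map step: emit (word, 1) pairs, as a Hadoop mapper would."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Shuffle/sort by key, then reduce: sum the counts per word."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

# Toy input standing in for lines read from HDFS.
lines = ["big data big insights", "data pipelines"]
counts = dict(reduce_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'insights': 1, 'pipelines': 1}
```

The shuffle/sort between the two phases is what Hadoop performs across the cluster; here it is simulated by `sorted` plus `groupby`.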

Hadoop Developer Work Routine and Skills:

Loading data from different datasets and deciding which file format is most efficient for a task

Ability to work with huge volumes of data to derive business intelligence

Applying HDFS file formats and structures such as Parquet and Avro to speed up analytics

Analyzing data, uncovering information, deriving insights, and proposing data-driven strategies

Knowledge of OOP languages such as Java, C++, or Python is good to have

Writing high-performance, reliable, and maintainable code

Familiarity with data-loading tools such as Flume and Sqoop

Knowledge of workflow schedulers such as Oozie

Analytical and problem-solving skills applied to the Big Data domain

Proven understanding of Hadoop, HBase, Hive, and Pig

Good aptitude for multi-threading and concurrency concepts

Knowledge of agile methodology for delivering software solutions

Working with Hadoop log files to manage and monitor the cluster

Developing MapReduce code that runs seamlessly on Hadoop clusters

Working knowledge of SQL, NoSQL, data warehousing, and database administration

Expertise in newer technologies such as Apache Spark and Scala programming

Thorough knowledge of the Hadoop ecosystem

Converting hard-to-grasp technical requirements into sound designs
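As a rough intuition for why columnar formats like Parquet speed up analytics, the following plain-Python sketch contrasts row and column layouts (the records and field names are made up for illustration; real Parquet files would be handled with a library such as PyArrow):

```python
# Row-oriented layout: one dict per record, as in a CSV or JSON dump.
rows = [
    {"user": "ana", "clicks": 10, "country": "AR"},
    {"user": "ben", "clicks": 3,  "country": "AR"},
    {"user": "eva", "clicks": 7,  "country": "UY"},
]

# Column-oriented layout: one list per field, as Parquet stores data on disk.
columns = {field: [row[field] for row in rows] for field in rows[0]}

# An aggregate over one field scans a single contiguous list,
# instead of touching every record in full.
total_clicks = sum(columns["clicks"])
print(total_clicks)  # 20
```

The same idea, applied at file level, is what lets a columnar engine read only the columns a query needs and skip the rest.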

 
Similar Jobs

MicroStrategy Analyst/Developer

Giannazzo & Asoc. Ciudad Autónoma de Buenos Aires, Buenos Aires

ERP AP/AR Consultant

Giannazzo & Asoc. Ciudad Autónoma de Buenos Aires, Buenos Aires

ERP Technical Consultant

Giannazzo & Asoc. Ciudad Autónoma de Buenos Aires, Buenos Aires
Company
Giannazzo & Asoc.
arevalo 2100
Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
Phone: 35313248
Website: www.giannazzoyasoc.com.ar
