
Manager Analytics – Hadoop

5 - 8 Years | Hyderabad

Posted: 46 days ago

Job Description

Job Responsibilities:

- We are looking for a Big Data Engineer who can implement an enterprise data warehouse on an open source platform with Hadoop and open source databases, create data pipelines, and migrate many projects from proprietary databases to open source platforms.

- Identify project issues, communicate them and assist in their resolution.
- Perform offline/online analysis of large data sets using components from the Hadoop ecosystem.
- Evaluate big data technologies and prototype solutions to improve our data processing and analytics architecture.

Values Based Competencies:
- Passionate about asking and answering questions through the use of large structured and unstructured data
- Integrity: Shows consistency between actions and words on a consistent basis
- Performance: Demonstrates urgency in pursuing goals, puts in extra effort to achieve required outcomes, and has a strong commitment to quality and accuracy.

Salary: Not Disclosed by Recruiter
Industry: Banking / Financial Services / Broking
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Software Developer
Employment Type: Permanent Job, Full Time

Desired Candidate Profile

- Experience Required - 5 to 8 years of experience in the Apache Hadoop ecosystem
- Exposure and experience working on Hortonworks
- Candidates should be able to pick up one or more new frameworks on short notice and react to an evolving technology landscape.
- BE/B.Tech/MCA/MS in Computer Science or a related area
- Around 5 - 7 years of data science and database experience; should have worked on large project migrations.
- Minimum 3 - 5 years of experience on a Big Data platform, especially Apache Spark
- Hands-on experience in building data warehouses, data marts, cubes, etc. using various products and tools

- Hands-on experience in ETL processes using open source tools like Talend or Informatica

- Experience with big data tools: Hive, Sqoop, Pig, ZooKeeper, NiFi, Mahout, Oozie.
- Good experience in Master Data Management; at least one real-time project experience is required.
- Strong experience in NoSQL databases such as MongoDB, Cassandra, etc.
- Needs to work with technical leads/business leads/architects to arrive at the technical architecture.

- Programming language: Python, Java, R (Good to know)
- Candidate should be hands-on with Pig scripting, Hive queries, and Apache Sqoop
- Candidate must have experience with Apache Spark and Scala
- Stream processing: Spark Streaming and Apache Kafka
- Candidate must have implemented solutions based on a NoSQL DB like HBase, etc.
- Experience with unstructured data, especially logs and files, will be a plus
- Maintain security and data privacy.
- Excellent communication and decision making skills are essential.
- Strong analytical, problem solving and decision-making skills.
- Identify project issues, communicate them and assist in their resolution.
- Experience in designing scalable applications; performance tuning expertise is highly expected
- Cooperative and team focused attitude
- Understanding of data, schemas, data models, and machine learning, and how to bring efficiency to the big data life cycle

Company Profile

National Payments Corporation of India
National Payments Corporation of India is an umbrella institution for all retail payments in the country. The core objective is to consolidate and integrate multiple systems with varying service levels into a nationwide, uniform, and standard business process for all retail payment systems. The other objective is to facilitate an affordable payment mechanism that benefits the common man across the country and helps financial inclusion.
Contact Details

Contact Company: National Payments Corporation of India