You will work with a team of analysts, developers, testers, and engineers as part of project delivery (development, maintenance, or testing). You will be responsible for delivering part of a functional or technical track of a project and will work with teams in an Agile delivery model.
Your experience ideally will include the following:
• Experience in developing and deploying data management solutions
• Proven track record of end-to-end implementation of integrated data management solutions leveraging the Hadoop Ecosystem and Spark with Python
• A good understanding of core Java principles
• A strong willingness to learn new technologies and the ability to think critically
• Exposure to the AWS ecosystem, preferably with hands-on experience in the AWS tool stack
• UNIX/Linux skills, including basic commands, shell scripting, and system administration/configuration, are good to have
• The main goal of this role is to migrate RDBMS-based systems to Hadoop-based systems and to integrate BI tools with Hadoop
• 3-5 years of experience with Hadoop or related technologies such as Hive, Impala, Solr, HBase, and/or Avro
• Working experience with streaming technologies such as Spark Streaming, Kafka, Flink, and/or Flume
• Work closely with team members from across departments to identify functional and system requirements
• Design, develop, and implement data models with quality and integrity top of mind to support our products, specifically the migration from RDBMS structures to Hadoop structures
• Create documentation, including flowcharts and diagrams, to support knowledge sharing
• Develop software using open-source technologies to interface distributed and relational data solutions
• Work to establish a Hadoop Cluster architecture
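To illustrate the RDBMS-to-Hadoop migration described above, the sketch below extracts rows from a relational table and stages them as a delimited file that `hdfs dfs -put` or a Hive external table could consume. This is a minimal, hedged example only: SQLite stands in for the source RDBMS, and the table and column names are hypothetical; at production scale this step would typically be a Spark JDBC read or a Sqoop import instead.

```python
import csv
import os
import sqlite3
import tempfile

# SQLite stands in for the source RDBMS here; in production this extract
# would be a Spark JDBC read (spark.read.format("jdbc")) or a Sqoop import.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Acme", "EMEA"), (2, "Globex", "APAC"), (3, "Initech", "NA")],
)

# Pull the full table; a large table would be chunked or partitioned by a
# split column, much like Sqoop's --split-by option.
rows = conn.execute("SELECT id, name, region FROM customers").fetchall()

# Stage as a delimited file that can be pushed to HDFS with `hdfs dfs -put`
# or read in place by a Hive external table (ROW FORMAT DELIMITED).
out_path = os.path.join(tempfile.mkdtemp(), "customers.csv")
with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "region"])
    writer.writerows(rows)
```

From there, a BI tool would query the data through Hive or Impala rather than the original RDBMS.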
We serve telecom solution providers in the areas of convergent billing, CRM, data warehousing, business intelligence, revenue assurance and fraud management, network consolidation, and bandwidth optimization.