Role
- Designing, developing, and maintaining Hadoop applications
- Developing and optimizing Spark applications for batch and stream processing
- Writing efficient MapReduce jobs, Hive and Pig queries, and HBase operations
- Implementing data ingestion and processing pipelines
- Optimizing performance and scalability of Hadoop clusters
- Troubleshooting and debugging Hadoop ecosystem components
- Building RESTful APIs and microservices in Scala
Requirements
- Deep technical background in Scala, with day-to-day applied experience of object-oriented Scala concepts (inheritance, anonymous classes, Scala objects, abstract classes, etc.).
- Strong hands-on experience in Distributed Frameworks like Spark.
- Strong hands-on experience with Spark RDDs, Spark DataFrames, Spark Core, and Spark Streaming.
- Experience with, or knowledge of, Hive or an equivalent data-warehouse technology.
- Strong hands-on experience in Shell scripting.
- Familiarity with the complete software development life cycle.
- Working experience with, or knowledge of, Git, SonarQube, and Control-M.
- Overall understanding of big data technologies.
Disclaimer: The company is committed to ensuring the privacy and security of your information. By submitting this form, you consent to the collection, processing, and retention of the information you provide. The data collected (which may include your contact details, educational background, work experience, and skills) will be used solely for the purpose of evaluating your qualifications for the position you are applying for. Your data will be stored securely and retained for the duration necessary to complete our hiring process. If you are not selected for the position, your data will be kept on file for a limited period in case future opportunities arise. You have the right to access, correct, or delete your data at any time by contacting us at Quess Singapore (quesscorp.sg).