Who we are:
We are changing the fintech industry in the USA, Canada, and the EU. We are changing how millions of people buy and spend through digital currency, and how they make their decisions.
Join us and be part of this revolution!
This role is responsible for the development and implementation of data warehouse solutions for company-wide applications and for managing large sets of structured data. The candidate will be expected to analyze complex requirements and work on data warehouse design, data transformation rules, and their implementation.
Data Scientists drive all phases of data aggregation, model development, and model deployment to support internal operations, financial pricing and profitability, and client-facing analytics and data products. They analyze complex, large-scale datasets using statistical methods and machine learning algorithms to develop predictive models and business intelligence solutions.

Responsibilities:
- Perform data analyses on existing data sources and discover new uses for them.
- Develop and evaluate predictive statistical models; select features and build and optimize classifiers using machine learning techniques.
- Mine data using state-of-the-art methods.
- Create and interpret strategic and operational analyses, assess options objectively, and present conclusions and recommendations to all levels of management.
- Create automated anomaly detection systems and continuously track their performance.

Skills and Qualifications:
- BS in Computer Science or equivalent; Master's in Data Science.
- Understanding of machine learning techniques and algorithms such as k-NN, Naive Bayes, SVM, and decision forests.
- Proficiency in query languages such as SQL and Hive.
- Data-oriented approach; excellent analytical, communication, and problem-solving skills.
- Basic understanding of cloud technologies.
- Experience with numerical/financial algorithms, data science, and market data systems.
- Financial analysis skills and strong RDBMS concepts.
- Working knowledge of big data tools such as Hadoop, Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, and MapReduce.
- Good understanding of background task queues, e.g., Celery with RabbitMQ.
- Expertise in at least one popular Python framework (e.g., Django, Flask, or Pyramid).