- Develop and translate algorithms (graph theory, MapReduce, recursion, and other areas) into working prototype code
- Create algorithms and heuristics to extract information from large data sets and implement the algorithms in software
- Mine and organize massive data sets of both structured and unstructured data.
- Define technology and research directions in specific technology fields
- Contribute actively to complex brainstorming sessions and to the creation of working prototypes
- Employ predictive modeling/data mining techniques to ask and answer business questions
Worksite: Campbell, CA
- Deep technical knowledge and understanding of one or more of the following: Machine Learning, Distributed Database Design, Data Visualization
- Experience with large, scalable, or high-performance computer systems
- Knowledge of algorithms, data structures, and performance optimization
- Exercise independent judgment in developing methods, techniques, and evaluation criteria
- Comfortable with basic statistical topics (Bayesian vs. frequentist inference, ANOVA, DOE, etc.)
- Strong written and oral communication skills
- Exposure to a statistical package (R, Weka, SAS, SPSS) is a plus
- Experience with Hadoop (HDFS/Hive/Pig/HBase/Sqoop) or other MapReduce paradigms is a plus
- Exposure to business intelligence tools (e.g., JasperSoft, Crystal Reports, Pentaho, Business Objects) is a plus
- Ideal candidate will have a Bachelor’s Degree in Computer Science or a related field plus 10+ years of experience in data analytics and/or software engineering; a Master’s or PhD in Computer Science, Mathematics, Statistics, or a technically related field is preferred