Activity Not Available
Analyzed 4 months ago, based on code collected 4 months ago.

Project Summary

Apache Mahout's goal is to build scalable machine learning libraries. By scalable we mean:
Scalable to reasonably large data sets. Our core algorithms for clustering, classification, and batch-based collaborative filtering are implemented on top of Apache Hadoop using the map/reduce paradigm. However, we do not restrict contributions to Hadoop-based implementations: contributions that run on a single node or on a non-Hadoop cluster are welcome as well. The core libraries are highly optimized to give good performance for non-distributed algorithms too.
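The map/reduce decomposition mentioned above can be illustrated with a toy example. The sketch below is not Mahout code and uses no Hadoop APIs; it is a minimal, self-contained Java illustration of how one k-means clustering iteration splits into a map phase (assign each point to its nearest centroid) and a reduce phase (average the points grouped under each centroid), which is the general pattern Mahout's Hadoop-based clustering jobs follow at scale. All class and method names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only (not Mahout's API): one k-means iteration
// phrased as a map phase and a reduce phase over in-memory data.
public class KMeansMapReduceSketch {

    // "Map" phase: emit (nearestCentroidIndex -> point) for every input point.
    static Map<Integer, List<double[]>> mapPhase(List<double[]> points,
                                                 double[][] centroids) {
        Map<Integer, List<double[]>> grouped = new HashMap<>();
        for (double[] p : points) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double d = 0;
                for (int i = 0; i < p.length; i++) {
                    double diff = p[i] - centroids[c][i];
                    d += diff * diff;           // squared Euclidean distance
                }
                if (d < bestDist) { bestDist = d; best = c; }
            }
            grouped.computeIfAbsent(best, k -> new ArrayList<>()).add(p);
        }
        return grouped;
    }

    // "Reduce" phase: average each centroid's assigned points to get
    // the updated centroid positions.
    static double[][] reducePhase(Map<Integer, List<double[]>> grouped,
                                  int k, int dim) {
        double[][] updated = new double[k][dim];
        for (Map.Entry<Integer, List<double[]>> e : grouped.entrySet()) {
            for (double[] p : e.getValue())
                for (int i = 0; i < dim; i++)
                    updated[e.getKey()][i] += p[i];
            for (int i = 0; i < dim; i++)
                updated[e.getKey()][i] /= e.getValue().size();
        }
        return updated;
    }

    public static void main(String[] args) {
        List<double[]> points = Arrays.asList(
            new double[]{0, 0}, new double[]{0, 1},
            new double[]{10, 10}, new double[]{10, 11});
        double[][] centroids = {{0, 0}, {10, 10}};
        double[][] updated = reducePhase(mapPhase(points, centroids), 2, 2);
        System.out.println(Arrays.deepToString(updated));
        // prints [[0.0, 0.5], [10.0, 10.5]]
    }
}
```

In a real Hadoop job the map phase runs in parallel over partitions of the input and the framework's shuffle step performs the grouping by centroid index; the in-memory `HashMap` here stands in for that shuffle.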

Tags

classifiers datamining machine_learning java regression hadoop data_mining distributed_computing distributed dimension_reduction algorithms clustering mapreduce library collaborative_filtering machinelearning recommender

In a Nutshell, Apache Mahout...

This project has no vulnerabilities reported against it.


Languages

  • Java: 84%
  • Scala: 11%
  • 4 Other: 5%

30 Day Summary

Oct 4 2016 — Nov 3 2016

12 Month Summary

Nov 3 2015 — Nov 3 2016
  • 295 Commits
    Up +50 (20%) from previous 12 months
  • 8 Contributors
    Down -2 (20%) from previous 12 months

Ratings

5 users rate this project: 3.6/5.0