Apache Mahout's goal is to build scalable machine learning libraries. By scalable we mean:
Scalable to reasonably large data sets. Our core algorithms for clustering, classification and batch-based collaborative filtering are implemented on top of Apache Hadoop using the map/reduce paradigm. However, we do not restrict contributions to Hadoop-based implementations: contributions that run on a single node or on a non-Hadoop cluster are welcome as well. The core libraries are highly optimized so that non-distributed algorithms also perform well.
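To make the map/reduce framing concrete, here is a minimal sketch of how one iteration of a clustering algorithm such as k-means can be expressed as a Hadoop job: the mapper assigns each input point to its nearest centroid, and the reducer averages the points per cluster to produce updated centroids. This is not Mahout's actual implementation; the class names, the 2-D comma-separated input format, and the hard-coded centroids are illustrative assumptions.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class KMeansIterationSketch {

  // Map step: assign each point to its nearest current centroid.
  // In a real job the centroids would be loaded in setup(), e.g. from the
  // distributed cache; they are hard-coded here to keep the sketch self-contained.
  public static class AssignMapper
      extends Mapper<Object, Text, IntWritable, Text> {
    private final double[][] centroids = {{0.0, 0.0}, {10.0, 10.0}};

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split(",");
      double x = Double.parseDouble(parts[0]);
      double y = Double.parseDouble(parts[1]);
      int best = 0;
      double bestDist = Double.MAX_VALUE;
      for (int i = 0; i < centroids.length; i++) {
        double dx = x - centroids[i][0], dy = y - centroids[i][1];
        double d = dx * dx + dy * dy;
        if (d < bestDist) { bestDist = d; best = i; }
      }
      // Emit (centroid id, point) so the reducer sees all points per cluster.
      context.write(new IntWritable(best), value);
    }
  }

  // Reduce step: average the points assigned to each centroid, yielding the
  // updated centroid for the next iteration.
  public static class RecomputeReducer
      extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      double sx = 0, sy = 0;
      long n = 0;
      for (Text v : values) {
        String[] parts = v.toString().split(",");
        sx += Double.parseDouble(parts[0]);
        sy += Double.parseDouble(parts[1]);
        n++;
      }
      context.write(key, new Text((sx / n) + "," + (sy / n)));
    }
  }
}
```

Running the job repeatedly, feeding each iteration's output centroids into the next, converges in the same way as single-node k-means; the map/reduce split is what lets each iteration scale across a Hadoop cluster.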
Use Patent Claims
These details are provided for information only. None of the information here is legal advice, and it should not be used as such.
There are no reported vulnerabilities.