Moderate Activity
Analyzed 2 days ago, based on code collected 3 months ago.

Project Summary

Apache Mahout's goal is to build scalable machine learning libraries. By "scalable" we mean:
Scalable to reasonably large data sets. Our core algorithms for clustering, classification, and batch-based collaborative filtering are implemented on top of Apache Hadoop using the map/reduce paradigm. However, we do not restrict contributions to Hadoop-based implementations: contributions that run on a single node or on a non-Hadoop cluster are welcome as well. The core libraries are highly optimized to allow for good performance also for non-distributed algorithms.
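To make the map/reduce paradigm mentioned above concrete, here is a minimal, framework-free Java sketch of one k-means iteration (k-means being one of Mahout's clustering algorithms) expressed as a map phase and a reduce phase. This is an illustration of the pattern only, not Mahout's actual API: the class and method names (`MapReduceKMeansSketch`, `mapPhase`, `reducePhase`) are hypothetical, and a real Hadoop job would distribute these phases across a cluster.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative sketch (NOT Mahout code): one k-means iteration in the
// map/reduce style. The "map" phase assigns each point to its nearest
// centroid; the "reduce" phase averages the points grouped under each
// centroid to produce the updated centroids.
public class MapReduceKMeansSketch {

    // Map phase: group each point under the index of its nearest centroid.
    static Map<Integer, List<double[]>> mapPhase(List<double[]> points,
                                                 double[][] centroids) {
        return points.stream()
                     .collect(Collectors.groupingBy(p -> nearest(p, centroids)));
    }

    // Reduce phase: average the points assigned to each centroid;
    // a centroid with no assigned points keeps its old position.
    static double[][] reducePhase(Map<Integer, List<double[]>> grouped,
                                  double[][] old) {
        double[][] updated = new double[old.length][];
        for (int c = 0; c < old.length; c++) {
            List<double[]> assigned = grouped.getOrDefault(c, List.of());
            updated[c] = assigned.isEmpty() ? old[c].clone() : mean(assigned);
        }
        return updated;
    }

    // Index of the centroid with the smallest squared Euclidean distance.
    static int nearest(double[] p, double[][] centroids) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int i = 0; i < p.length; i++) {
                double diff = p[i] - centroids[c][i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }

    // Component-wise mean of a non-empty list of points.
    static double[] mean(List<double[]> pts) {
        double[] m = new double[pts.get(0).length];
        for (double[] p : pts)
            for (int i = 0; i < m.length; i++) m[i] += p[i];
        for (int i = 0; i < m.length; i++) m[i] /= pts.size();
        return m;
    }

    public static void main(String[] args) {
        List<double[]> points = List.of(
            new double[]{0, 0}, new double[]{0, 1},
            new double[]{9, 9}, new double[]{9, 10});
        double[][] centroids = {{0, 0}, {10, 10}};
        double[][] next = reducePhase(mapPhase(points, centroids), centroids);
        System.out.println(Arrays.deepToString(next));
        // prints [[0.0, 0.5], [9.0, 9.5]]
    }
}
```

Because map emits independent (key, value) pairs and reduce only combines values sharing a key, each phase parallelizes naturally across nodes, which is what makes this formulation scale to large data sets.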

Tags

algorithms classifiers clustering collaborative_filtering datamining data_mining dimension_reduction distributed distributed_computing hadoop java library machinelearning machine_learning mapreduce recommender regression

In a Nutshell, Apache Mahout...

This project has no vulnerabilities reported against it.


Languages

  • Java: 81%
  • Scala: 13%
  • 4 Other: 6%

30 Day Summary

Feb 6 2017 — Mar 8 2017

12 Month Summary

Mar 8 2016 — Mar 8 2017
  • 316 Commits
    Up +49 (18%) from previous 12 months
  • 9 Contributors
    Down -1 (10%) from previous 12 months

Ratings

5 users rate this project:
3.6/5.0