Tags: Browse Projects

Select a tag to browse associated projects and drill deeper into the tag cloud.

GlusterFS

Analyzed over 1 year ago

GlusterFS is a distributed file system capable of scaling to several petabytes. It aggregates storage bricks over InfiniBand RDMA or TCP/IP into one large parallel network file system. Bricks can be built from commodity hardware, such as x86-64 servers with SATA RAID, and can use InfiniBand HBAs.
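As a rough sketch of how bricks are aggregated into one volume: the hostnames, volume name, and brick paths below are placeholders, and the exact CLI syntax varies by GlusterFS version.

```shell
# On server1: add server2 to the trusted pool, then build a volume
# from one brick on each server (all names are placeholders).
gluster peer probe server2
gluster volume create demo-vol transport tcp \
    server1:/export/brick1 server2:/export/brick1
gluster volume start demo-vol

# On a client: mount the volume as a single network file system.
mount -t glusterfs server1:/demo-vol /mnt/demo
```

This is a provisioning sketch only; a real deployment would also choose a volume layout (distributed, replicated, or striped) when creating the volume.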

1.4M lines of code

171 current contributors

over 1 year since last commit

23 users on Open Hub

Activity Not Available
Rating: 4.6
Licenses: GNU GPLv2, LGPLv3_or_...

JBoss Cache

Claimed by JBoss. Analyzed 4 months ago

JBoss Cache is a product designed to cache frequently accessed Java objects in order to dramatically improve application performance. By eliminating unnecessary database accesses, JBoss Cache decreases network traffic and increases application scalability. In addition, JBoss Cache is a clustering library that lets you transparently share objects across the JVMs of a cluster. It provides three APIs to suit your needs: the Core API offers a tree-structured, node-based cache; a POJO API performs fine-grained replication of Java objects for maximum performance benefit; and a newer Searchable API runs object-based queries against the cache to search for cached objects.

160K lines of code

0 current contributors

almost 7 years since last commit

8 users on Open Hub

Activity Not Available
Rating: 5.0

Cluster SSH - Cluster Admin Via SSH

Analyzed about 8 hours ago

ClusterSSH controls a number of xterm windows from a single graphical console window, allowing commands to be run interactively on multiple servers at once over SSH.
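A minimal usage sketch: the host names below are placeholders, and the location of the clusters file varies between ClusterSSH versions.

```shell
# Open one xterm per host plus a single admin console; anything typed
# into the console is replayed to every session at once.
cssh user@web1 user@web2 user@web3

# Hosts can also be grouped under an alias in a clusters file
# (commonly ~/.clusterssh/clusters), one group per line:
#   webfarm web1 web2 web3
cssh webfarm
```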

6.12K lines of code

0 current contributors

over 4 years since last commit

4 users on Open Hub

Inactive
Rating: 4.0
Licenses: No declared licenses

xcpu

Claimed by Los Alamos National Lab. Analyzed about 15 hours ago

The XCPU project is a suite of tools for cluster management. It includes utilities for spawning jobs, managing cluster resources, and scalably distributing boot images across a cluster, as well as tools for creating and controlling virtual machines in a cluster environment.

0 lines of code

0 current contributors

time since last commit not available

2 users on Open Hub

Activity Not Available
Rating: 4.3
Primary language not available
Licenses: No declared licenses

App::FQStat

Analyzed about 11 hours ago

App::FQStat is the internal module behind the fqstat.pl tool. fqstat is an interactive, console-based front-end for Sun's Grid Engine (http://gridengine.sunsource.net/). It grew out of an in-house convenience tool, but it may be useful to others who loathe the slow Java GUI qmon that ships with the Grid Engine software, or who find the huge job list produced by qstat painful. Usage is simple: run it, and it shows all current jobs on the cluster. Press "h" for online help or F10 to enter the menu. It can show, select, highlight, sort, kill, and modify jobs in your queue. fqstat has been tested against several versions of the Grid Engine software, starting somewhere around 6.0.

2.41K lines of code

0 current contributors

over 9 years since last commit

1 user on Open Hub

Inactive
Rating: 0.0
Licenses: Artistic_..., GPL

g-Eclipse

Analyzed over 8 years ago

g-Eclipse is a framework that allows users and developers to access Computing Grids and Cloud Computing resources in a unified way. The framework itself is independent of any particular Grid middleware or Cloud Computing provider. The g-Eclipse project maintains a set of connectors to Grid middlewares and provides an adapter to the Amazon web services EC2 and S3.

175K lines of code

0 current contributors

over 9 years since last commit

1 user on Open Hub

Activity Not Available
Rating: 5.0

memx

No analysis available

The MemX system, developed in the Operating Systems and Networks (OSNET) Lab at Binghamton University, provides a mechanism to virtualize the collective memory resources of machines in a cluster with zero modifications to applications or the operating system.

Features:
- Auto-discovery and pooling of unused memory of nodes across physical or virtual machine networks.
- Fully transparent Linux kernel-space implementation (2.6.xx); no centralized servers and no userland components (aside from the application itself).
- Completely transparent access for memory clients (no application or OS modifications).
- Live client state transfers: shut down a client and reattach its block device on a new host or virtual machine, somewhat like a self-migrating block device without VMs.
- Live server shutdowns: migrate the server memory of individual hosts without disconnecting clients (self-dispersing servers).

MemX implements all of the above in about 6,000 lines of kernel code.

How it works: a Linux module is inserted into the vanilla kernel of every participating machine in the cluster, turning each machine into either a client or a server. The client module implements an optional block device (the client can also be accessed from within the kernel). This device can be used for mmap(), for swapping, or for file systems; it is your choice.

Node discovery: servers announce (broadcast) themselves and their load statistics to every machine in the cluster at regular intervals. These announcements let clients make page allocation and retrieval decisions across all available servers. Clients accordingly accept block I/O requests, either from the local Linux swap daemon or from the file-system layer, and service them from cluster-wide memory.

Advantages of this approach: the benefits of a kernel-space implementation are obtained without changing the virtual memory system. The network protocol stack is bypassed because no routing, fragmentation, or transport layer is needed. Cluster-wide memory is managed and virtualized across a low-latency, high-bandwidth interconnect. Depending on the state of the cluster, any client or server can join or leave the system at will. Any workstation with a vanilla Linux kernel can run legacy memory-intensive applications.

Previous work (similar remote-memory systems that do not satisfy the same goals; for each: test platform, code availability, test cluster, test network, page-fault time or speedup, main caveat, self-migration capability, current status):
- MemX: Linux 2.6; code available; 140 GB DRAM total across twelve 8-core 2.5 GHz Opterons; Gigabit Ethernet; 80 usec; needs replication; self-migration: yes; current.
- JumboMem: Linux/Unix; code available; 250 nodes, 4 GB each, 2 GHz Opterons; 4X InfiniBand; 54 usec; application changes to the malloc() library call; no; current.
- Nswap: Linux 2.6; no code; eight 512 MB Pentium 4 nodes; Gigabit Ethernet; timing unsure; no caveat listed; no; current.
- LambdaRAM: platform unsure; no code; cluster unsure; wide-area networks; ~80 ms; details unsure; no; 2003?
- SAMSON: Linux 2.2; no code; seven 933 MHz Pentiums; Myrinet or Ethernet; 300 usec; kernel changes; no; 2003.
- Network RamDisk: Linux 2.0 or Digital Unix 4.0; code available; 233 MHz Pentium or DEC Alpha 3000; 155 Mbps ATM or 10 Mbps Ethernet; 10-20 msec; user-level servers; no; 1999.
- User-level VM: Nemesis; no code; 200 MHz Pentium nodes; 10 Mbps Ethernet; several msec; application is changed; no; 1999.
- Berkeley NOW: GLUnix + Solaris; code available; 105 UltraSPARC I nodes; 1.28 Gbps Myrinet; timing unsure; caveat unsure; no; 1998.
- Dodo (user-level): Linux 2.0, Condor-based; no code; 14 nodes, 200 MHz Pentiums; 100 Mbps Ethernet; speedup of 2 to 3; application is changed; no; 1998.
- Global Memory System: Digital Unix 4.0; no code; 5-20 nodes, 225 MHz DEC Alphas; 1.28 Gbps Myrinet; 370 usec; kernel VM subsystem changes; no; 1999.
- Reliable Remote Memory Pager: DEC OSF/1; no code; 16? DEC Alpha 3000 nodes; 10 Mbps Ethernet; several msec, speedup of 2; user-level servers, not Linux-based; no; 1996.

Publications: MemX: Supporting Large Memory Applications in Xen Virtual Machines. In Proc. of the Second International Workshop on Virtualization Technology in Distributed Computing (VTDC07), a workshop in conjunction with Super Computing 2007, Reno, Nevada, November 2007. M. Hines and K. Gopalan.
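Based on the description above, attaching the client block device as swap might look like the following sketch. The module and device names here are assumptions for illustration; the actual names come with the MemX distribution.

```shell
# On machines contributing memory (module names are hypothetical):
insmod memx_server.ko

# On the machine that will consume cluster memory:
insmod memx_client.ko          # exposes a block device, e.g. /dev/memx0
mkswap /dev/memx0              # format the remote-memory device as swap
swapon -p 10 /dev/memx0        # prefer cluster memory over local disk swap
```

The same block device could instead hold a file system or be mmap()ed directly, per the description above.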

0 lines of code

0 current contributors

time since last commit not available

0 users on Open Hub

Activity Not Available
Rating: 0.0
Primary language not available
Licenses: MIT

karaage

Analyzed 44 minutes ago

Karaage is a cluster account management tool.

45.3K lines of code

4 current contributors

20 days since last commit

0 users on Open Hub

Moderate Activity
Rating: 5.0