
Project Summary

To install or use MemX now, go to Usage.

The MemX system, developed in the Operating Systems and Networks (OSNET) Lab at Binghamton University, provides a mechanism to virtualize the collective memory resources of machines in a cluster with zero modifications to your application or operating system.

Features:

- Auto-discovery and pooling of unused memory of nodes across physical or virtual machine networks.
- Fully transparent Linux kernel-space implementation (2.6.xx): no centralized servers and no userland components (aside from the application itself).
- Completely transparent access for memory clients (no application or OS modifications).
- Live client state transfers: a client can shut down and reattach its block device on a new host or virtual machine, somewhat like a self-migrating block device without VMs.
- Live server shutdowns: the memory of an individual server can be migrated to other hosts without disconnecting clients (self-dispersing servers).

MemX uses about 6000 lines of kernel code for all of the above features.

How it Works:

In MemX, a Linux kernel module is inserted into the vanilla kernel of every participating machine in the cluster, turning each machine into either a client or a server.

The client module implements an optional block device (the client can also be accessed from within the kernel). This device can be used for mmap(), for swapping, or to hold a filesystem; the choice is yours.
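As a rough illustration, the three uses above might look like the following. Note that the module file name and the device node (`memx_client.ko`, `/dev/memx0`) are hypothetical stand-ins, not names confirmed by the MemX documentation:

```shell
# Load the (hypothetical) MemX client module on a vanilla kernel.
insmod memx_client.ko

# 1) Use the cluster-wide memory block device as swap space.
mkswap /dev/memx0
swapon /dev/memx0

# 2) Or format it with a filesystem and mount it.
mkfs.ext3 /dev/memx0
mount /dev/memx0 /mnt/memx

# 3) Or open /dev/memx0 from an application and mmap() it directly.
```

Because the device behaves like an ordinary block device, no application or OS changes are needed for any of these modes.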

Node Discovery: Servers announce (broadcast) themselves and their load statistics to every machine in the cluster at regular intervals. These announcements allow clients to make page allocation and retrieval decisions across all available servers. Clients then accept block I/O requests, either from the local Linux swap daemon or from the file system layer, and service them from cluster-wide memory.
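The discovery-and-placement loop can be sketched in user-space pseudocode. This is only an illustrative model, with invented names; the real logic runs inside the kernel module:

```python
# Sketch of MemX-style node discovery and page placement.
# All class and field names here are illustrative, not from the real code.
from dataclasses import dataclass


@dataclass
class ServerAnnouncement:
    """A periodic broadcast from a memory server carrying its load stats."""
    server_id: str
    free_pages: int


class Client:
    """Tracks server announcements and picks a server for each page write."""

    def __init__(self):
        self.servers = {}

    def on_announcement(self, ann: ServerAnnouncement):
        # Latest announcement wins; stale load figures are overwritten.
        self.servers[ann.server_id] = ann

    def place_page(self) -> str:
        # Choose the server currently advertising the most free memory.
        best = max(self.servers.values(), key=lambda a: a.free_pages)
        best.free_pages -= 1
        return best.server_id


client = Client()
client.on_announcement(ServerAnnouncement("nodeA", free_pages=2))
client.on_announcement(ServerAnnouncement("nodeB", free_pages=5))
print(client.place_page())  # -> nodeB (more free memory advertised)
```

Since every client hears every broadcast, no centralized directory server is needed, matching the "no centralized servers" property above.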

Advantages of our approach: We get the benefits of a kernel-space implementation without changing the virtual memory system. The network protocol stack is bypassed because we need neither routing, fragmentation, nor a transport layer. Cluster-wide memory is managed and virtualized across a low-latency, high-bandwidth interconnect. Depending on the state of the cluster, any client or server can join or leave the system at will. Any workstation with a vanilla Linux kernel can run any legacy memory-intensive application.

Previous Work:

The following is a list of other remote memory systems that are similar in spirit but do not satisfy all of the goals we target:

(Refer to "How it Works" for explanation of terms used)

Classification

| Name | Test Platform Used | Code Available | Test Cluster Size | Test Network Used | Page-Fault Time or Speedup | Main Caveat | Self-migration Capable | Currently Active |
|------|--------------------|----------------|-------------------|-------------------|----------------------------|-------------|------------------------|------------------|
| MemX | Linux 2.6 | Yes | 140 GB DRAM total across twelve 8-core 2.5 GHz Opterons | Gigabit Ethernet | 80 usec | Needs replication | Yes | Current |
| JumboMem | Linux/Unix | Yes | 250 nodes, 4 GB each, 2 GHz Opterons | 4X InfiniBand | 54 usec | Application changes to malloc() library call | No | Current |
| Nswap | Linux 2.6 | No | Eight 512 MB nodes, Pentium 4 | Gigabit Ethernet | unsure | None | No | Current |
| LambdaRAM | unsure | No | unsure | Wide-area networks | 80 ms? | unsure | No | 2003? |
| "SAMSON" | Linux 2.2 | No | Seven 933 MHz Pentiums | Myrinet or Ethernet | 300 usec | Kernel changes | No | 2003 |
| "Network Ramdisk" | Linux 2.0 or Digital Unix 4.0 | Yes | 233 MHz Pentium or DEC Alpha 3000 | 155 Mbps ATM or 10 Mbps Ethernet | 10-20 msec | User-level servers | No | 1999 |
| "User-level VM" | Nemesis | No | 200 MHz Pentium nodes | 10 Mbps Ethernet | Several millisec | Application is changed | No | 1999 |
| Berkeley "NOW" | GLUnix + Solaris | Yes | 105 nodes, UltraSPARC I | 1.28 Gbps Myrinet | unsure | unsure | No | 1998 |
| User-level "Dodo" | Linux 2.0, Condor-based | No | 14 nodes, 200 MHz Pentium | 100 Mbps Ethernet | Speedup of 2 to 3 | Application is changed | No | 1998 |
| "Global Memory System" | Digital Unix 4.0 | No | 5-20 nodes, 225 MHz DEC Alphas | 1.28 Gbps Myrinet | 370 usec | Kernel VM subsystem changes | No | 1999 |
| "Reliable Remote Memory Pager" | DEC OSF/1 | No | 16? nodes, DEC Alpha 3000 | 10 Mbps Ethernet | Several millisec, speedup of 2 | User-level servers, not Linux-based | No | 1996 |

Publications

MemX: Supporting Large Memory Applications in Xen Virtual Machines. M. Hines and K. Gopalan. In Proc. of the Second International Workshop on Virtualization Technology in Distributed Computing (VTDC '07), held in conjunction with Supercomputing 2007, Reno, Nevada, November 2007.

Tags

clusters distributedsystems linux memx remotememory virtualmachines xen


MIT License
Permitted: Commercial Use, Modify, Distribute, Private Use, Sub-License
Forbidden: Hold Liable
Required: Include Copyright, Include License
