Tags: Browse Projects

Select a tag to browse associated projects and drill deeper into the tag cloud.

Ela, functional language


  Analyzed 24 days ago

Ela is a modern programming language that runs on the CLR and Mono. The language is dynamically (and strongly) typed and comes with a rich and extensible type system out of the box. It provides extensive support for the functional programming paradigm, including but not limited to first-class functions, first-class currying and composition, list/array comprehensions, pattern matching, polymorphic variants, thunks, etc. It also provides some imperative programming features. Ela supports both strict and non-strict evaluation but is strict by default. The current language implementation is a lightweight and efficient interpreter written entirely in C#. The interpreter was designed to be embeddable and has a clear and straightforward API.
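No Ela source is quoted in this summary, so the fragment below is only a rough Python analogue (Python 3.10+ for the match statement) of the features the description lists: first-class functions and composition, partial application standing in for Ela's built-in currying, a list comprehension, structural pattern matching, and an explicitly deferred thunk. The names used are made up for illustration; Ela's own syntax differs.

```python
import functools

def compose(f, g):
    """First-class functions: build a new function from two existing ones."""
    return lambda x: f(g(x))

# Partial application stands in for Ela's built-in currying (illustrative only).
add3 = lambda x, y, z: x + y + z
add_ten = functools.partial(add3, 10)

# List comprehension: squares of the odd numbers in 1..10.
odd_squares = [x * x for x in range(1, 11) if x % 2]

def describe(value):
    """Structural pattern matching (Python 3.10+)."""
    match value:
        case []:
            return "empty list"
        case [head, *_]:
            return f"list starting with {head}"
        case {"tag": t}:
            return f"record tagged {t!r}"
        case _:
            return "something else"

# A thunk: evaluation is deferred until the callable is forced.
thunk = lambda: sum(range(1_000_000))

if __name__ == "__main__":
    double_then_negate = compose(lambda n: -n, lambda n: n * 2)
    print(double_then_negate(21))   # -42
    print(add_ten(1, 2))            # 13
    print(odd_squares)              # [1, 9, 25, 49, 81]
    print(describe([3, 4, 5]))      # list starting with 3
    print(thunk())                  # forced only here
```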

71.7K lines of code

1 current contributor

about 1 year since last commit

1 user on Open Hub

Very Low Activity
5.0
 

memx


  No analysis available

To install or use MemX now, go to the Usage page. The MemX system, developed in the Operating Systems and Networks (OSNET) Lab at Binghamton University, provides a mechanism to virtualize the collective memory resources of machines in a cluster with zero modifications to your application or operating system.

Features:
- Auto-discovery and pooling of the unused memory of nodes across physical or virtual machine networks.
- Fully transparent Linux kernel-space implementation (2.6.XX).
- No centralized servers and no userland components (aside from the application).
- Completely transparent access to memory clients (no application or OS modifications).
- Live client state transfers: shut down a client and reattach its block device on a new host or virtual machine, somewhat like a self-migrating block device without VMs.
- Live server shutdowns: migrate the server memory of individual hosts without disconnecting clients (self-dispersing servers).

MemX uses about 6,000 lines of kernel code for all of the above features.

How it Works: In MemX, a Linux module is inserted into the vanilla kernel of every participating machine in the cluster, turning each machine into either a client or a server. The client module implements an optional block device (the client can also be accessed from within the kernel). This device can be used for mmap(), for swapping, or for filesystems; the choice is yours.

Node Discovery: Servers announce (broadcast) themselves and their load statistics to every machine in the cluster at regular intervals. These announcements allow clients to make page allocation and retrieval decisions across all available servers. Clients then accept block I/O requests, either from the local Linux swap daemon or from the file system layer, and service them from cluster-wide memory (sketched in user space below).

Advantages of our approach:
- We get the benefits of a kernel-space implementation without changing the virtual memory system.
- The network protocol stack is bypassed because we do not need routing, fragmentation, or a transport layer.
- Cluster-wide memory is managed and virtualized across a low-latency, high-bandwidth interconnect.
- Depending on the state of the cluster, any client or server can be made to join or leave the system at will.
- Any workstation with a vanilla Linux kernel can be used to run any legacy memory-intensive application.

Previous Work: The following remote memory systems are similar in spirit but do not satisfy the goals that we do (refer to "How it Works" for an explanation of the terms used):
- MemX: platform Linux 2.6; code available: yes; test cluster: 140 GB DRAM total across twelve 8-core 2.5 GHz Opterons; network: Gigabit Ethernet; page-fault time: 80 usec; main caveat: needs replication; self-migration capable: yes; status: current.
- JumboMem: platform Linux/Unix; code available: yes; test cluster: 250 nodes, 4 GB each, 2 GHz Opterons; network: 4X InfiniBand; page-fault time: 54 usec; main caveat: application changes to the malloc() library call; self-migration capable: no; status: current.
- Nswap: platform Linux 2.6; code available: no; test cluster: eight 512 MB Pentium 4 nodes; network: Gigabit Ethernet; page-fault time: unsure; main caveat: none; self-migration capable: no; status: current.
- LamdaRAM: platform unsure; code available: no; test cluster: unsure; network: wide-area networks; page-fault time: 80 ms?; main caveat: unsure; self-migration capable: no; status: 2003?
- "SAMSON": platform Linux 2.2; code available: no; test cluster: seven 933 MHz Pentiums; network: Myrinet or Ethernet; page-fault time: 300 usec; main caveat: kernel changes; self-migration capable: no; status: 2003.
- "Network Ramdisk": platform Linux 2.0 or Digital Unix 4.0; code available: yes; test cluster: 233 MHz Pentium or DEC Alpha 3000; network: 155 Mbps ATM or 10 Mbps Ethernet; page-fault time: 10-20 msec; main caveat: user-level servers; self-migration capable: no; status: 1999.
- "User-level VM": platform Nemesis; code available: no; test cluster: 200 MHz Pentium nodes; network: 10 Mbps Ethernet; page-fault time: several msec; main caveat: application is changed; self-migration capable: no; status: 1999.
- Berkeley "NOW": platform GLUnix + Solaris; code available: yes; test cluster: 105 nodes, UltraSPARC I; network: 1.28 Gbps Myrinet; page-fault time: unsure; main caveat: unsure; self-migration capable: no; status: 1998.
- User-level "Dodo": platform Linux 2.0, Condor-based; code available: no; test cluster: 14 nodes, 200 MHz Pentium; network: 100 Mbps Ethernet; speedup of 2 to 3; main caveat: application is changed; self-migration capable: no; status: 1998.
- "Global Memory System": platform Digital Unix 4.0; code available: no; test cluster: 5-20 nodes, 225 MHz DEC Alphas; network: 1.28 Gbps Myrinet; page-fault time: 370 usec; main caveat: kernel VM subsystem changes; self-migration capable: no; status: 1999.
- "Reliable Remote Memory Pager": platform DEC OSF/1; code available: no; test cluster: 16? nodes, DEC Alpha 3000; network: 10 Mbps Ethernet; page-fault time: several msec, speedup of 2; main caveat: user-level servers, not Linux-based; self-migration capable: no; status: 1996.

Publications: M. Hines and K. Gopalan. "MemX: Supporting Large Memory Applications in Xen Virtual Machines" (slides available). In Proc. of the Second International Workshop on Virtualization Technology in Distributed Computing (VTDC07), a workshop in conjunction with Supercomputing 2007, Reno, Nevada, November 2007.

0 lines of code

0 current contributors

0 since last commit

0 users on Open Hub

Activity Not Available
0.0
 
Mostly written in: language not available
Licenses: MIT

Managed Compiler Infrastructure


  Analyzed 18 days ago

The Managed Compiler Infrastructure is a modern and intuitive compiler back end for managed languages.

24.2K lines of code

0 current contributors

over 4 years since last commit

0 users on Open Hub

Inactive
0.0
 