Tags: Browse Projects


LAPACK


  Analyzed 22 days ago

LAPACK provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision. Note: Ohloh's statistics are not accurate. LAPACK has dozens of major contributors, but most do not use any form of source control. Contributions are merged by only a few people, and then only in bursts.
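
As a concrete illustration of the routines described above, here is a minimal sketch that solves a small linear system with LAPACK's dgesv driver (LU factorization plus back substitution) through the LAPACKE C interface; it assumes LAPACKE is installed and linked (e.g. -llapacke -llapack), and the matrix values are arbitrary example data:

    // Minimal sketch: solve A x = b with LAPACK's dgesv via LAPACKE.
    // Assumes LAPACKE is installed; link with -llapacke -llapack.
    #include <cstdio>
    #include <lapacke.h>

    int main() {
        // 3x3 system in row-major order; arbitrary example values.
        double A[9] = { 4, 1, 0,
                        1, 4, 1,
                        0, 1, 4 };
        double b[3] = { 1, 2, 3 };  // right-hand side; overwritten with x
        lapack_int ipiv[3];         // pivot indices from the LU factorization

        lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, b, 1);
        if (info != 0) {
            std::fprintf(stderr, "dgesv failed: info = %d\n", (int)info);
            return 1;
        }
        for (int i = 0; i < 3; ++i)
            std::printf("x[%d] = %f\n", i, b[i]);
        return 0;
    }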

1.94M lines of code

0 current contributors

over 3 years since last commit

11 users on Open Hub

Inactive
Rating: 4.8

Armadillo C++ Library


  Analyzed 2 months ago

Armadillo is a C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point, and complex numbers are supported, as well as a subset of trigonometric and statistics functions. Various matrix decompositions are provided through optional integration with the LAPACK and ATLAS libraries. A delayed-evaluation approach is employed (at compile time) to combine several operations into one and reduce (or eliminate) the need for temporaries; this is accomplished through recursive templates and template meta-programming. This library is useful if C++ has been decided as the language of choice (due to speed and/or integration capabilities), rather than another language like Matlab or Octave.
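
To make the delayed-evaluation point concrete, here is a minimal sketch using Armadillo's mat/vec classes; it assumes Armadillo is installed (link with -larmadillo) and built with LAPACK support for the solve, and the data are random example values:

    // Minimal sketch of Armadillo usage; assumes the library is installed.
    // Expressions such as 2 * A + A.t() are combined by Armadillo's
    // recursive templates at compile time, reducing temporaries.
    #include <armadillo>

    int main() {
        arma::mat A(4, 4, arma::fill::randu);  // 4x4 random matrix
        arma::vec b(4, arma::fill::randu);     // random right-hand side

        arma::vec x = arma::solve(A, b);       // solve A x = b via LAPACK
        arma::mat C = 2 * A + A.t();           // one combined expression

        x.print("x =");
        C.print("C =");
        return 0;
    }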

0 lines of code

0 current contributors

0 since last commit

6 users on Open Hub

Activity Not Available
Rating: 5.0
Mostly written in language not available
Licenses: mozilla_p...

BLAS (Basic Linear Algebra Subprograms)


  No analysis available

The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. The Level 1 BLAS perform scalar, vector, and vector-vector operations, the Level 2 BLAS perform matrix-vector operations, and the Level 3 BLAS perform matrix-matrix operations. Because the BLAS are efficient, portable, and widely available, they are commonly used in the development of high-quality linear algebra software, LAPACK for example.
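
The three levels correspond directly to routine families; the sketch below exercises one routine from each level through the CBLAS interface, assuming a BLAS implementation with CBLAS headers is installed (reference BLAS, OpenBLAS, ATLAS, etc.):

    // Minimal sketch of the three BLAS levels via the CBLAS interface.
    // Assumes a CBLAS implementation is installed (link with e.g. -lcblas).
    #include <cstdio>
    #include <cblas.h>

    int main() {
        double x[3] = { 1, 2, 3 };
        double y[3] = { 4, 5, 6 };
        double A[9] = { 1, 0, 0,
                        0, 2, 0,
                        0, 0, 3 };  // 3x3 matrix, row-major
        double C[9] = { 0 };

        // Level 1: vector-vector (dot product).
        double d = cblas_ddot(3, x, 1, y, 1);

        // Level 2: matrix-vector (y := 1.0*A*x + 0.0*y).
        cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3, 1.0, A, 3, x, 1, 0.0, y, 1);

        // Level 3: matrix-matrix (C := 1.0*A*A + 0.0*C).
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    3, 3, 3, 1.0, A, 3, A, 3, 0.0, C, 3);

        std::printf("dot = %f, y[0] = %f, C[0] = %f\n", d, y[0], C[0]);
        return 0;
    }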

0 lines of code

0 current contributors

0 since last commit

3 users on Open Hub

Activity Not Available
Rating: 0.0
Mostly written in language not available
Licenses: blas

jblas


  Analyzed 22 days ago

jblas is a fast linear algebra library for Java. jblas is based on BLAS and LAPACK, the de facto industry standard for matrix computations, and uses state-of-the-art implementations like ATLAS for all its computational routines, making jblas very fast. jblas is essentially a lightweight wrapper around the BLAS and LAPACK routines. These packages originated in the Fortran community, which explains their often archaic API; on the other hand, modern implementations are hard to beat performance-wise. jblas aims to make this functionality available to Java programmers so that they do not have to worry about writing JNI interfaces and the calling conventions of Fortran code.

20.4K lines of code

1 current contributor

11 months since last commit

2 users on Open Hub

Very Low Activity
Rating: 5.0

ScaLAPACK — Scalable Linear Algebra PACKage


  Analyzed 2 months ago

ScaLAPACK is a library of high-performance linear algebra routines for parallel distributed memory machines. ScaLAPACK solves dense and banded linear systems, least squares problems, eigenvalue problems, and singular value problems.

0 lines of code

0 current contributors

0 since last commit

1 user on Open Hub

Activity Not Available
Rating: 0.0
Mostly written in language not available
Licenses: BSD-3-Clause

tmv-cpp


  No analysis available

TMV is a linear algebra class library for C++ designed to combine an obvious, user-friendly interface with the speed of optimized libraries like BLAS and LAPACK. Note: the code is still being transferred from SourceForge to Google Code; for now, please see the project page at SourceForge for more information.

Key features of TMV:

- Operator overloading: All matrix and vector arithmetic is written with the corresponding arithmetic operators. For example, v2 = x * m * v1 calculates the product of a scalar x, a matrix m, and a vector v1, and stores the result in vector v2. This way of writing matrix arithmetic makes debugging the linear algebra statements in your code much easier than checking through all of the parameters in a BLAS or LAPACK call.

- Delayed evaluation (aka lazy evaluation): The expression v2 = x * m * v1, for example, inlines directly to MultMV(x,m,v1,v2), which does the actual calculation, so there is no performance hit from the legibility of the operators.

- Templates: As the name TMV indicates, the type of the elements in a vector or matrix is a template parameter. So you can have Matrix<double>, Matrix<int>, Matrix<std::complex<double> >, or even use some user-defined type (e.g. Quad for some 16-byte quad-precision class) in Matrix<Quad>.

- Complex types: Mixing complex and real types in an arithmetic statement is legal. So in v2 = x * m * v1, it is fine for m and x to be real and the vectors to be complex, or any other mixing, so long as the object being assigned to is complex, of course. (These statements are generally not possible in BLAS or LAPACK.)

- Multiple matrix shapes: In addition to dense square matrices, TMV supports upper and lower triangular matrices, diagonal matrices, banded matrices, symmetric matrices, Hermitian matrices, and symmetric or Hermitian banded matrices. The code uses the appropriate algorithms for each shape for increased efficiency. Generic sparse matrices are not yet supported, but we are planning on adding those in a future version.

- Matrix division: The expression x = b/A can be used to solve the matrix equation Ax = b. Control methods on A can be used to tell it which decomposition to use to find the solution. There are also controls to save the decomposition for later repeated use, and even to do the decomposition in place to save on storage.

- Decompositions: There are quite a few algorithms for solving matrix equations, basically coming down to using different decompositions of the matrix, and the different decompositions have different properties in terms of speed and stability. Therefore, in TMV, you may specify which kind of decomposition to use for any division statement. The possible choices are LU, QR, QRP, Cholesky, Bunch-Kaufman, and SVD. You can also perform these decompositions directly, rather than as part of a division calculation. In addition, you can do polar decomposition and matrix square root as direct decompositions (neither of which is useful for division).

- Views: There are both constant and mutable views into a vector or matrix, so expressions like m.row(3) += 4. * m.row(0) and m2 *= m.Transpose() do the obvious things.

- Alias checking: Many matrix packages calculate m *= m incorrectly. TMV automatically checks whether two objects in a calculation use the same storage and creates temporaries as needed. It only checks the address of the first element, so you can still trip it up, but most of the time this is good enough.

- Fast for large matrices: The code is designed to be fast for large matrices, employing many of the algorithms used by LAPACK to maximize the level-3 fraction of the calculation. In fact, some of the algorithms used by TMV are even faster than the current LAPACK algorithms (QRP decomposition and Bunch-Kaufman division, for example).

- Fast for small matrices: The code is also designed to be fast for small matrices. The best way to achieve that in TMV is to use the SmallMatrix class, which takes the dimensions of the matrix as template parameters. TMV is then able to inline a lot of the arithmetic rather than using function calls, and with the sizes known at compile time, the compiler can unroll the loops for even more speed. The result is that TMV is much faster than BLAS for small matrices.

- Flexible storage: A matrix may be declared either row-major or column-major. Band matrices also allow diagonal-major storage.

- Flexible indexing: You can specify that you want to access a matrix using either the normal C convention (0-based indexing) or the Fortran convention (1-based indexing).

- BLAS/LAPACK: The library can be compiled to call BLAS and/or LAPACK routines, but if you don't have them, the native code will also work. Most of the native TMV algorithms are as fast as LAPACK, and TMV's native arithmetic routines are competitive with BLAS (up to 30-50% slower at worst). But if speed is critical for you and you are using fairly large matrices, then you should definitely compile with BLAS.
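
A minimal sketch of this operator style, based on the class and operator names described above (tmv::Matrix, tmv::Vector, and division via operator/); the exact headers and constructors are assumptions from the project's documentation, so treat this as illustrative rather than definitive:

    // Illustrative TMV usage sketch; assumes the TMV library is installed
    // and that tmv::Matrix/tmv::Vector and b/A division work as described.
    #include <iostream>
    #include "TMV.h"

    int main() {
        tmv::Matrix<double> A(3, 3);
        tmv::Vector<double> b(3);

        // Fill A with a diagonally dominant example matrix and b with 1..3.
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A(i, j) = (i == j) ? 4.0 : 1.0;
            b(i) = i + 1;
        }

        // "Matrix division": x = b/A solves the system A x = b.
        tmv::Vector<double> x = b / A;

        std::cout << "x = " << x << std::endl;
        return 0;
    }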

0 lines of code

0 current contributors

0 since last commit

0 users on Open Hub

Activity Not Available
Rating: 0.0
Mostly written in language not available
Licenses: gpl

Template Matrix/Vector Library for C++


  No analysis available

TMV is a comprehensive linear algebra library which uses operator overloading, views & delayed evaluation to simplify matrix and vector expressions in C++. It is well documented and can optionally call optimized BLAS/LAPACK for faster execution times.

0 lines of code

0 current contributors

0 since last commit

0 users on Open Hub

Activity Not Available
Rating: 0.0
Mostly written in language not available
Licenses: No declared licenses