Project Summary

TMV is a linear algebra class library for C++ designed to combine an obvious, user-friendly interface with the speed of optimized libraries like BLAS and LAPACK.

Note: I am still in the process of transferring the code from SourceForge over to Google Code. For now, please see the project page at SourceForge for more information.

Key features of TMV:

  • Operator overloading: All matrix and vector arithmetic is written with the corresponding arithmetic operators. For example, v2 = x * m * v1 calculates the product of a scalar x, a matrix m, and a vector v1, and stores the result in the vector v2. Writing matrix arithmetic this way makes the linear algebra statements in your code much easier to debug than checking through all of the parameters in a BLAS or LAPACK call.
  • Delayed evaluation (aka lazy evaluation): The expression v2 = x * m * v1, for example, inlines directly to MultMV(x,m,v1,v2), which does the actual calculation, so there is no performance hit from the legibility of the operators.
  • Templates: As the name TMV indicates, the type of the elements in a vector or matrix is a template parameter. So you can have Matrix<double>, Matrix<float>, Matrix<complex<double> >, or even use some user-defined type (e.g. Matrix<Quad> for some 16-byte quad-precision class).
  • Complex types: Mixing complex and real types in an arithmetic statement is legal. So in v2 = x * m * v1, it is fine for m and x to be real and the vectors to be complex, or any other mixture, so long as the object being assigned to is complex, of course. (These statements are generally not possible in BLAS or LAPACK.)
  • Multiple matrix shapes: In addition to dense square matrices, TMV supports upper and lower triangular matrices, diagonal matrices, banded matrices, symmetric matrices, hermitian matrices, and symmetric or hermitian banded matrices. The code uses the appropriate algorithm for each shape for increased efficiency. Generic sparse matrices are not yet supported, but we are planning to add them in a future version.
  • Matrix division: The expression x = b/A can be used to solve the matrix equation Ax = b. Control methods on A can be used to tell it which decomposition to use to find the solution. There are also controls to save the decomposition for later repeated use, and even to do the decomposition in place to save on storage.
  • Decompositions: There are quite a few algorithms for solving matrix equations, basically coming down to using different decompositions of the matrix, and the different decompositions have different properties in terms of speed and stability. Therefore, in TMV, you may specify which kind of decomposition to use for any division statement. The possible choices are LU, QR, QRP, Cholesky, Bunch-Kaufman, and SVD. You can also perform these decompositions directly, rather than as part of a division calculation. In addition, you can do a polar decomposition or a matrix square root as a direct decomposition (neither of which is useful for division).
  • Views: There are both constant and mutable views into a vector or matrix, so expressions like m.row(3) += 4. * m.row(0) and m2 *= m.Transpose() do the obvious things.
  • Alias checking: Many matrix packages calculate m *= m incorrectly. TMV automatically checks whether two objects in a calculation use the same storage and creates temporaries as needed. It only checks the address of the first element, so you can still screw up, but most of the time this is good enough.
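To make the interface concrete, here is a minimal sketch built from the expressions quoted above. The header name, the fill-value constructors, and the DivideUsing/SaveDiv control methods are assumptions based on my reading of the TMV documentation; exact spellings may differ between TMV versions.

    #include <complex>
    #include "TMV.h"  // assumed name of the main TMV header

    int main()
    {
        tmv::Matrix<double> m(4,4);
        tmv::Vector<std::complex<double> > v1(4), v2(4);

        // Fill m with something nonsingular (diagonally dominant).
        for (int i=0; i<4; ++i) {
            for (int j=0; j<4; ++j) m(i,j) = (i==j) ? 10. : 1.;
            v1(i) = std::complex<double>(i,1.);
        }

        double x = 2.5;

        // Operator overloading + delayed evaluation: this inlines to
        // MultMV(x,m,v1,v2).  Mixing the real m and x with complex vectors
        // is legal because the object being assigned to is complex.
        v2 = x * m * v1;

        m.row(3) += 4. * m.row(0);  // mutable row view
        m *= m.Transpose();         // alias-checked: a temporary is created

        // Division: solve m * s = b.  DivideUsing/SaveDiv are assumed
        // control-method spellings; treat them as illustrative.
        tmv::Vector<double> b(4,1.), s(4);
        m.DivideUsing(tmv::QRP);    // choose the QRP decomposition
        m.SaveDiv();                // keep the decomposition for later reuse
        s = b / m;

        return 0;
    }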
  • Fast for large matrices: The code is designed to be fast for large matrices, employing many of the algorithms used by LAPACK to maximize the level-3 (matrix-matrix) fraction of the calculation. In fact, some of the algorithms used by TMV are even faster than the current LAPACK algorithms (QRP decomposition and Bunch-Kaufman division, for example).
  • Fast for small matrices: The code is also designed to be fast for small matrices. The best way to achieve that in TMV is to use the SmallMatrix class, which takes the dimensions of the matrix as template parameters. TMV is then able to inline much of the arithmetic rather than using a function call, and with the sizes known at compile time, the compiler can unroll the loops for even more speed. The result is that TMV is much faster than BLAS for small matrices.
  • Flexible storage: A matrix may be declared either row-major or column-major. Band matrices also allow diagonal-major storage.
  • Flexible indexing: You can specify whether to access a matrix using the normal C convention (0-based indexing) or the Fortran convention (1-based indexing).
  • BLAS/LAPACK: The library can be compiled to call BLAS and/or LAPACK routines, but if you don't have them, the native code works on its own. Most of the native TMV algorithms are as fast as LAPACK, and TMV's native arithmetic routines are competitive with BLAS (up to 30-50% slower at worst). But if speed is critical for you and you are using fairly large matrices, you should definitely compile with BLAS.
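A similarly hedged sketch of the fixed-size and storage options described above. The SmallMatrix/SmallVector names follow the text; the second header and the storage/indexing template-argument names are assumptions from the documentation's general pattern:

    #include "TMV.h"        // assumed main header
    #include "TMV_Small.h"  // assumed header for the fixed-size classes

    int main()
    {
        // Dimensions as template parameters: with the sizes known at
        // compile time, TMV can inline the arithmetic and the compiler
        // can unroll the loops.
        tmv::SmallMatrix<double,3,3> a;
        tmv::SmallVector<double,3> u, w;

        for (int i=0; i<3; ++i) {
            for (int j=0; j<3; ++j) a(i,j) = (i==j) ? 4. : 1.;
            u(i) = i + 1.;
        }

        w = a * u;  // no function-call overhead for a 3x3 times 3 product

        // Storage and indexing as template arguments (assumed names):
        tmv::Matrix<double,tmv::ColMajor> mc(5,5,0.);  // column-major storage
        tmv::Matrix<double,tmv::RowMajor,tmv::FortranStyle> mf(5,5,0.);
        mf(1,1) = 3.;  // FortranStyle: indices start at 1

        return 0;
    }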

Tags

blas lapack linearalgebra matrix template vector

In a Nutshell, tmv-cpp...

No code available to analyze

Open Hub computes statistics on FOSS projects by examining source code and commit history in source code management systems. This project has no code locations, so Open Hub cannot perform this analysis.


GNU General Public License v2.0 or later
Permitted: Commercial Use, Modify, Distribute, Place Warranty

Forbidden: Sub-License, Hold Liable

Required: Include Copyright, Include License, Distribute Original, Disclose Source, State Changes
These details are provided for information only. Nothing here is legal advice, and it should not be used as such.


This project has no vulnerabilities reported against it.

