
Dear Open Hub Users,

We’re excited to announce that we will be moving the Open Hub Forum to https://community.synopsys.com/s/black-duck-open-hub. Beginning immediately, users can head over, register, get technical help, and discuss issues pertinent to the Open Hub. Registered users can also subscribe to Open Hub announcements there.

On May 1, 2020, we will be freezing https://www.openhub.net/forums and users will not be able to create new discussions. If you have any questions or concerns, please email us at [email protected].

Request for a killer feature

First off, great service, guys; you're building some very useful tools.

I love the comment ratio functionality, as it can put some implicit pressure on people to add more comments to their code, both to improve their own ratio and to get the project classified as a well-commented code base.

An absolutely killer feature along those lines would be a testing ratio. We sometimes use tools like Clover that report what percentage of the code is covered by tests. It will even report at the module level and show exactly which parts of the code get tested. But the thing it lacks (AFAIK) is a link back to the repository: whose code has lots of tests, and whose code is lacking. This would be great for putting friendly pressure on people to get their code better tested.

Yes, I realize that this might be really hard to do, since you'd probably need a different tool for each language, and some languages may not have tools like this yet. And you'd have to successfully run all the tests and get the configuration just right.

So I realize this is basically a dream. But I just thought I'd let you all know where my dreams for this stuff lie, and take the opportunity to thank you for a great service.

Chris Holmes over 17 years ago

Hi chomie,

Yes, this sort of thing is right up our alley, and as time and technology allow we would love to put something like this in.

If you start getting down to the nitty-gritty of figuring out code coverage, yes, that's a very hard problem. Even simply identifying test code is tricky.

Ohloh never actually tries to build or execute any of the code; I can't imagine how much trouble that would cause us. I believe (correct me if I'm wrong, I might look really stupid here) that tools like Clover actually execute the tests and trap method calls to determine coverage. Our system so far simply parses source code to do its work.
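
To make that concrete, a parse-only analysis boils down to scanning source text line by line and tallying what it sees, without ever compiling or running anything. The snippet below is just a toy sketch (hard-coded for C-style comment syntax, not our actual analyzer) of that idea:

```python
# Toy sketch of parse-only line counting: classify each line of a source file
# as code, comment, or blank. Comment syntax is hard-coded for C-like
# languages purely as an illustration; this is not Ohloh's real analyzer.
def count_lines(source: str):
    code = comment = blank = 0
    in_block_comment = False
    for raw in source.splitlines():
        line = raw.strip()
        if in_block_comment:
            comment += 1
            if "*/" in line:
                in_block_comment = False
        elif not line:
            blank += 1
        elif line.startswith("//"):
            comment += 1
        elif line.startswith("/*"):
            comment += 1
            if "*/" not in line:
                in_block_comment = True
        else:
            code += 1
    return {"code": code, "comment": comment, "blank": blank}

if __name__ == "__main__":
    sample = """\
/* A sample file. */
// increment a counter
int add_one(int x) {
    return x + 1;
}
"""
    print(count_lines(sample))  # {'code': 3, 'comment': 2, 'blank': 0}
```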

One simple thing we might be able to do is allow project admins to specify some regular expressions against filenames to label chunks of code as test code. Although this doesn't correlate tests to their target functions and doesn't measure coverage, it might be good enough for some projects, and at least it lets you see who has been editing the test code and who hasn't. Of course, this doesn't work for projects that mix test code inline with the code it tests.
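
For illustration, that filename-based labeling could look something like the sketch below. The patterns and function names here are hypothetical examples, not an actual Ohloh configuration format:

```python
import re

# Hypothetical patterns a project admin might register to mark test code.
# These are illustrative only; real projects would supply their own.
TEST_FILE_PATTERNS = [
    re.compile(r"(^|/)tests?/"),          # files under a test/ or tests/ directory
    re.compile(r"(^|/)test_[^/]+\.py$"),  # Python-style test modules
    re.compile(r"[^/]+Test\.java$"),      # JUnit-style test classes
    re.compile(r"[^/]+_spec\.rb$"),       # RSpec-style specs
]

def is_test_file(path: str) -> bool:
    """Return True if the path matches any admin-supplied test pattern."""
    return any(p.search(path) for p in TEST_FILE_PATTERNS)

def split_by_test_code(paths):
    """Partition repository paths into (test_files, other_files)."""
    test_files = [p for p in paths if is_test_file(p)]
    other_files = [p for p in paths if not is_test_file(p)]
    return test_files, other_files

if __name__ == "__main__":
    sample = [
        "src/main/java/Parser.java",
        "src/test/java/ParserTest.java",
        "lib/scanner.rb",
        "spec/scanner_spec.rb",
        "tests/test_scanner.py",
    ]
    tests, others = split_by_test_code(sample)
    print("test code:", tests)
    print("other code:", others)
```

With a split like that, the existing per-contributor statistics could in principle be reported separately for test files and non-test files, which is the "who has been editing the test code" view mentioned above.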

Thanks,
Robin

Robin Luckey over 17 years ago