
News

Posted almost 5 years ago by [email protected] (n4js dev)
Short-circuit evaluation is a popular feature of many programming languages and is also part of N4JS. In this post, we show how the control-flow analysis of the N4JS IDE deals with short-circuit evaluation, since it can have a substantial effect on the data flow and execution of a program.

Short-circuit evaluation is a means to improve runtime performance when evaluating boolean expressions. The improvement comes from skipping code execution. The example above shows an if-statement whose condition consists of two boolean expressions that combine the values 1, 2 and 3, together with its control flow graph. Note that the number literals are placeholders for more meaningful subexpressions. First the logical and, then the logical or gets evaluated: (1 && 2) || 3. In case the expression 1 && 2 evaluates to true, the evaluation of the subclause 3 is skipped and the entire condition evaluates to true. This skipping of nested boolean expressions is called short-circuit evaluation. However, instead of skipping expression 3, expression 2 might be skipped: in case condition 1 does not hold, the control flow continues with condition 3 right away. This control flow takes place completely within the if-condition, whereas the former short circuit targets the then-block. The reasoning behind short-circuit evaluation is that the skipped code does not affect the result of the whole boolean expression. If the left-hand side of a logical or expression evaluates to true, the whole or expression does as well; only if the left-hand side is false will the right-hand side be evaluated. Conversely, the right-hand side of a logical and expression is skipped in case the left-hand side evaluates to false.

Side Effects

Risks of short-circuit evaluation arise when a subexpression has side effects: these side effects will not occur if the subexpression is skipped. However, a program that relies on side effects of expressions inside an if-condition can be called fragile (or adventurous). In any case it is recommended to write side-effect-free conditions. Have a look at the example above: in case variable i has a value of zero, the right-hand-side expression i++ is executed; otherwise, it is skipped. The side effect here is the post-increment of i. If the value of i is other than zero, this value will be printed out; otherwise, the value will be incremented but not printed. The control flow graph shows this behavior with the edge starting at i and targeting the symbol console.

Loops

Loop conditions also benefit from short-circuit evaluation. This is important to know when reasoning about all possible control flow paths through a loop: each short circuit introduces another path. Combining all of them makes the data flow in loops difficult to understand when the subconditions have side effects.

Creative use of short-circuit evaluation

Misusing short-circuit evaluation can mimic if-statements using plain expressions, without resorting to the language feature of conditional expressions (i.e. condition() ? then() : else()). This can be handy in places where only expressions are allowed, e.g. when passing arguments to method calls or when computing the update part of for-loops. The picture above shows the two versions: the first uses an if-statement and the second uses an expression statement. Both call the functions condition, then and end; depending on the return value of condition, the function then is executed or not.
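As a concrete illustration, here is a minimal JavaScript-style sketch of the two versions; the function names condition, then and end are taken from the post, but the exact expression shown in the original picture may differ slightly:

    function condition() { console.log("condition"); return true; }
    function then()      { console.log("then"); }
    function end()       { console.log("end"); }

    // Version 1: a plain if-statement followed by end().
    if (condition()) {
        then();
    }
    end();

    // Version 2: a single expression statement that mimics the if-statement.
    // The logical or is always truthy (its right operand is the literal true),
    // so end() is never skipped, while then() only runs when condition()
    // returns a truthy value.
    (condition() && then() || true) && end();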
Consequently, the printouts are either "condition then end" or "condition end", depending on the control flow. The corresponding control flows are depicted on the right: the upper three lines refer to the if-statement, the lower three lines to the expression statement. They reveal that the expression statement behaves similarly to the if-statement. Note that the control flow edge in the last line that skips the nodes end and end() is never traversed, since the logical or expression always evaluates to true. The interested reader can find more details about the N4JS flow graphs and their implementation in the N4JS Design Document, Chapter: Flow Graphs.

by Marcus Mews
Posted almost 5 years ago by [email protected] (n4js dev)
The N4JS IDE integrates validations and analyses that are quite common for IDEs of statically typed languages. However, these analyses are seldom available for dynamically typed languages like N4JS or TypeScript. In this post we present the null/undefined analysis for N4JS source code.

TypeError: Cannot read property of undefined
- Developer's staff of life

The runtime error above occurs pretty often for JavaScript programmers: a quick search on Google returned about 1.2 million results for the term TypeError: Cannot read property of undefined. When constraining the search to site:stackoverflow.com, the query still yields about 126 thousand results. These numbers are comparable to the somewhat similar error NullPointerException, which has about 3 million hits on Google and about 525 thousand when constrained to stackoverflow.com. Some of these results are caused by rather simple mistakes that a null/undefined analysis could detect. As a result, the developer could restructure the code and remove these potential errors even before running the first test, and hence save time.

Null/Undefined Analysis

The N4JS IDE provides static analyses that indicate problems when a property access is detected on a variable which can be null or undefined. The analysis considers all assignments that occur either through a simple assignment expression or via destructuring. Loops, conditional expressions (e.g. i = c ? 1 : 0;) and declaration initializers are respected as well.

The screenshot above shows a first example where a potential TypeError is detected. Since at least one of the definitions of v that are reachable backwards from v.length assigns null or undefined to v, a warning is issued stating that v may be undefined.

To make sure that the analysis produces results quickly, it is implemented with some limitations. One is that the analysis is done separately for each body of a function, method, etc. (i.e. it is an intra-procedural analysis). Hence it lacks knowledge about values that cross the borders of these bodies, such as the return value of a nested function call. In addition, property variables (such as v.length) are not analyzed, since this would require the analysis to be context-sensitive with respect to the receiver object (here v). However, these limitations are common for static analyses of statically typed languages and still allow many problems regarding local variables and parameters to be detected.

Usually, the analysis makes optimistic assumptions. For instance, a local variable may receive the value of a method call or of another non-local variable. In this situation the analysis assumes this value is neither null nor undefined. The same is true for function parameters. Only when there are distinct indications in the source code that the value of a local variable may be null or undefined will the analysis issue a warning.

Guards

Sometimes the programmer knows that a variable may be null or undefined and hence checks the variable explicitly, for instance using if (v) {...}. Such a check disables the warning in the then-branch, which complies with the execution semantics. As shown in the screenshot above, no warning is shown at the expression w.length < 1 nor at the statement return w.length;. Of course, the else-branch of such a check would consequently always indicate a warning when a property of variable v is accessed. Checks in conditional expressions and binary logical expressions (e.g. v && v.length) are also supported.
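To make the guard behavior concrete (the post's screenshots are not reproduced in this excerpt), here is a minimal JavaScript-style sketch; the variable name w follows the post, everything else is made up for the example:

    // w may be undefined, because one reachable definition assigns undefined.
    function safeLength(cond) {
        let w = cond ? "hello" : undefined;
        if (w) {                  // explicit guard
            if (w.length < 1) {   // no warning at w.length < 1
                return 0;
            }
            return w.length;      // no warning at return w.length;
        }
        // Accessing w.length here would be flagged, since w may be undefined.
        return 0;
    }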
A reader might think: "In case w is null, the expression w.length would fail as well." True, but in this example the analysis detects the value of w being undefined. If null might have been assigned to w, e.g. in an if-condition before, the analysis would issue a warning of w being null at the two w.length expressions.

Data Flow

There are situations where the value of a variable is null or undefined due to a previous assignment from another variable which may have been null or undefined before, as shown in the example above. The null/undefined dereference problem then occurs later, when a property is accessed. Since the analysis respects data flow, it can follow such subsequent assignments. Hence a warning is shown at the property access, indicating the null or undefined problem. Moreover, the warning also indicates the source of the null or undefined value, which would be the variable w in the example above. The interested reader can find more details about the N4JS flow graphs and their implementation in the N4JS Design Document, Chapter: Flow Graphs.

by Marcus Mews
Posted almost 5 years ago
The community really came through for the early-bird deadline this year. The program committee reviewed a record number of talks (144) to come up with a top-six list. Congratulations to the speakers chosen for early acceptance! And remember, the final deadline is Monday, July 15, so there's still lots of time to submit your proposal.

Programming for Accessibility - Rory Preddy
Quarkus the Shrink Ray to Your Cloud Native Java Applications - Kamesh Sampath
When Your Happy Dreams Are About Dying - Zak Greant
What GraalVM Means for the Eclipse IDE - Martin Lippert
A Tale of Rust, the ESP32 and IoT - Jens Reimann
OSGi CDI Integration Specification - Raymond Auge
Posted almost 5 years ago
Between last exams and our first pull requests on GitHub.

First, let's get the most important thing out of the way: we got our first pull request on GitHub 🎉. Thanks to GitHub user Lakshminarayana Nekkanti for having enough interest in the project to fix a few bugs and raise some issues. More on that down below. And what have I been up to these past weeks? I still had to finish some exams at university before I could focus 100% on Dartboard. Nevertheless I still had some time to fix a few bugs and review the PRs on GitHub.

First Testing

While developing the first features of the plugin, manual testing couldn't really be done by more people than me. In the past week, however, Lars (@vogella) took some time to extensively test the existing features. In doing so he discovered some issues with the user experience, especially in the first steps a new user needs to take to get up and running.

Bug Fixes - Plugin

Executing a Dart file was done in the main thread, which caused the IDE to freeze completely during the execution of the program. This was problematic for programs that took longer to execute or for continuously running programs like webservers or a Flutter app, for example. See eclipse/dartboard#50.

Errors in the Dart program output are printed to the error stream of a process by default, but our Dart terminal only showed the standard output stream coming from the Dart process. To fix this I added the error stream to the Dart terminal. Any errors that happen during compilation or at runtime are now printed to the terminal and shown in red. See eclipse/dartboard#44.

Lakshminarayana Nekkanti added the ability to use the New context menu of Eclipse to create Dart files and projects. He also added validation of the entered Dart SDK location on the preference page while the user types in a path. This results in instant feedback when modifying the Dart SDK location. Again, thank you Lakshminarayana!

Bug Fixes - LSP4E

Whenever a file is saved, the LSP4E plugin sends a notification to the language server that the file was saved. This is useful for language servers that need to execute a task upon saving of a file (for example recompiling or analyzing the file). The Dart analysis server, however, does not need this notification and advertises this at the first connection with a client. LSP4E ignored this until now and sent the notification anyway, resulting in an error raised by the analysis server which was also shown to the user. I added a check to LSP4E that verifies whether the language server actually supports the notification and only sends it if it does. This resolved the error in the analysis server. See Bug 548210.

Theming

The default theme of TM4E is rather light and a little hard to read. Thus we changed the theme that is used by default in Dartboard to another variant of the Eclipse editor theme. The new theme does not seem to be quite finished yet; it will require some polishing in the TM4E project.

Wrap up

That's it for the past two weeks. Even though I still had to finish some exams at university, I could use the project as a welcome change from studying, which was very refreshing. It's also awesome to see interest from all over the world in the form of bug fixes and submitted issues.

Links: GitHub, Eclipse Foundation Project Page

Dart and the related logo are trademarks of Google LLC. We are not endorsed by or affiliated with Google LLC.
Posted almost 5 years ago
With the end of the Eclipse PolarSys adventure comes that of the Papyrus IC. In the end, we could not maintain the momentum to move forward. We failed to grow our community, and in doing so, we failed our community. But this is not the end of Papyrus! Not by a long shot! Papyrus is more vibrant than ever. New variants are still being built, e.g., Papyrus UMLLight, new releases are still provided with continued improvements, and a new major release is planned for the project. As well, many companies, research groups, schools, and individuals are still teaching with it, working with it, improving it. A personal shout out to EclipseSource employees and Queen's University's faculty and students for their dedication to the Papyrus products, and to Francis, our glorious leader, for his perseverance! For the time being, this blog will remain a beacon of light for Papyrus, but there will be a time when it will have to close. Other endeavours await the author. If anyone wants to help or take it over, please let me know.
Posted almost 5 years ago
A Bit of History

When I joined the Eclipse CDT project back in 2002 (yeah, it's been a long time), I was working on modeling tools for "real time", or more accurately, embedded reactive systems: communicating state machines. I wrote code generators that generated C and C++ from ROOM models and then eventually UML-RT. ROOM was way better, by the way, and easier to generate for because it was more semantically complete and well defined. That objective is key later in this story.

We had the vision to integrate our modeling tools more closely with Integrated Development Environments. We started looking at Visual Studio, but Eclipse was the young up-and-comer. That, and IBM bought us, Rational by that point, and had already bought OTI, who built Eclipse, so it was a natural fit. And we were all in Ottawa. And by chance, Ottawa-based QNX had already written a C/C++ IDE based on Eclipse and was open sourcing it, and it was perfect for our customers as well. It's amazing how that all happened and led to my life as CDT Doug.

Our first order of business was to help the CDT become an industry-class C/C++ IDE and a foundation for integrating our modeling tools. Since we wanted to be able to generate model elements from code, we needed accurate C and C++ parsers and indexers. No one figured we could do it, but we were able to put together a somewhat decent system written in Java in the org.eclipse.cdt.core plug-in.

Scaling is Hard

However, as the community started to try it out on real projects, especially ones of a significant size, we started to run into pretty massive performance problems with the indexer. We were essentially doing full builds of the user's projects and storing the results in a string table. On large projects, builds take a long time. But users expect that and put up with it because they really need the binaries a build produces. They don't have the same patience for their IDEs building indexes they don't really see, and we paid a pretty high price for that.

As a solution, I wondered if we could store the symbol information we were gathering in a way that we could load it up from disk as we were parsing other files, and plug the symbol info into the AST the same way we do symbols normally. This would allow us to parse header files once and reuse the results, similar to how precompiled headers work. The price you pay is in accuracy, since some systems parse header files multiple times with different macro settings. But my guess was that it wouldn't be that bad.

It was hard to convince my team at IBM Rational to take this road. Accuracy was king for our modeling tools. But when I moved to join QNX, I decided to forgo that requirement and give this "fast indexer" strategy a go. And the rest is history. Performance on large projects was an order of magnitude faster. Incremental indexing of files as they are saved isn't even noticeable. It was a huge success and my proudest contribution to the CDT. And it got even better when other community members lent us their expertise to make the accuracy better and better, so you barely notice that at all either.

C++ Rises from the "Dead"

Move the clock forward a decade and we started running into a problem. The C++ standards community has new life and is adding a tonne of new features at a three-year cadence. The CDT community has long since lost most of the experts that built the original parsers. Lucky for us, a new crop of contributors has come along and is doing heroic work to keep up. But it's getting harder and harder.
One thing we benefit from is how slow embedded developers, the majority of CDT users, are to adopt the new standards. It gives us time, but not forever. We need to find a better way.

Then along came the Language Server Protocol and a small handful of language servers that do C/C++. I've investigated four of them. Three of them are based on llvm and clang. One of them is in tree with llvm and clang in clang-tools-extra, i.e., clangd. The other two are projects that use libclang with parts of the tree, i.e., cquery and ccls. Those two projects are what I call "one person projects", and with cquery at least, that person found something else to do last November. Beware of the one person project.

clangd

I've spent a lot of time with clangd while experimenting with Visual Studio Code. For what it does, clangd is very accurate and really fast. It uses compile_commands.json files to find out which source files are built and which compiler and command lines they use. I've had to fork the tree to add support for discovering compilers it doesn't know about, but that was pretty easy to put together. It gives great content assist and you get the benefit of clang's awesome compilation error diagnostics as you type. It shows a lot of promise.

However, clangd for the longest time lacked an indexer. When you search for references, it only finds them in files you have opened previously. The thought, as I understand it, is that you use another process to build the index, and that is usually done at build time. However, not all users have such an environment set up, so having an index created by the IDE is a mandatory feature. Now, clangd did eventually get an indexer, but it does what the old CDT indexer did and completely parses the source tree. That predictably takes forever on large projects, and I don't think users have the appetite to take a huge step backwards like that.

IntelliSense

While waiting for the right solution to arrive for clangd, I thought I'd give the Microsoft C/C++ Tools for VS Code a try. My initial experience was quite surprising. It actually worked well with a gnu tools cross compiler project I used for testing. You have to teach it how to parse your code using a magic JSON file, which fits right in with the rest of VS Code. It's able to pick out the default include path when you point it at your compiler. It has MI support for debugging, though no built-in support for remote debugging, but that was hackable. It seemed like a reasonable alternative, at least for VS Code.

However, when I tried it with one of our production projects it quickly fell apart. It does a great job trying to figure out include paths, similar to the heuristics we use in CDT. That includes things like treating all the folders in your workspace as potential include path entries. But it tended to make mistakes. It even has support for compile_commands.json files, so I could tell it the command lines that were used. It did better but still tried to do too much and gave incorrect results.

That, and it doesn't have an index yet either. One is coming soon, but if it can't figure out how to parse my files correctly, it's not going to be a great experience. Still a lot of work to do there.

Where do we go from here?

As it stands today, at least from a CDT perspective, there really isn't a language server solution that comes near what we have in CDT. Yes, some things are better. Both these language servers use real parsers to parse the code (or at least clangd does; Microsoft's, of course, is closed source so I can only assume).
They give really good content assist and error diagnostics, and open declaration works. But without a usable indexer, you don't get accurate symbol references. And I haven't even mentioned refactoring, which CDT has and which is not even suggested in the language server protocol. So if all you're doing is typing in code, the new language servers are great. But if you need to do some code mining to understand the code before you change it, you're out of luck. The good news is that we are continuing to see investment in them, so who knows. But then, maybe the CDT parsers catch up with the language standards before these other language servers grow great indexers, making the whole thing moot. I wouldn't bet against that right now.
Posted almost 5 years ago
Welcome to the second issue of the Eclipse IoT Newsletter for 2019, a newsletter tailored to share Eclipse IoT community and industry news.
Posted almost 5 years ago
This month the Eclipse Newsletter features the Eclipse IDE 2019-06, which is now available for download!
Posted almost 5 years ago
Last week, Eclipse 2019-06 was released, a new version of the Eclipse IDE and platform. The first notable improvement is... The post EclipseSource Oomph Profile – updated to 2019-06 appeared first on EclipseSource.