
News

Posted about 2 hours ago
Yesterday I blogged about my endeavors into loading and using the TypeScript Language Service (V8 and Nashorn) and calling it from Java to get things like an outline, auto-completions, etc. Today I connected these headless pieces to my JavaFX-Editor-Framework. The result can be seen in the video below. To make the TypeScript LanguageService feel responsible not only for TypeScript files but also for JavaScript, I used the 1.8 beta. As you'll notice, JS support is not yet at a stage where it can replace e.g. Tern, but I guess things are going to improve in the future.
Posted about 9 hours ago
This post explains the structure of the JSDT projects, and it is the result of my direct experience. This page also serves as a discussion starter for JSDT development. Kudos to all those who comment and leave constructive feedback, here and on the JSDT bugzilla [486037, 477020]. By reading this article you will be able to understand where the JSDT projects are located, which are the related git repositories, and how to get the source code to work with one of those projects, i.e. JSDT Editor, Nodejs, Bower, JSON Editor, Gulp, Grunt, HTML>js, JSP>js, etc.

JSDT Repositories

The image below represents the current structure of the JSDT repositories. Almost all of the links to the above source code repositories are accessible via the https://projects.eclipse.org/projects/webtools/developer page.

Description:
- eclipse.platform.runtime: source repo required for silencing some IDE validation at compile-time.
- webtools: contains the website for all the webtools projects. It's big, but needed to update the project page.
- webtools.jsdt: source repo containing the most up-to-date code for JSDT.
- webtools.jsdt.[core, debug, tests]: old source repos containing outdated code (last commit: 2013).
- webtools.sourceediting: source repo for JSDT Web and JSON.

Note: the Gerrit icons link to the repos accepting Gerrit contributions, so anybody can easily contribute.

Early Project Structure

According to older documentation, JSDT was split into four areas: Core, Debug, Tests and Web. The source of the first three was directly accessible under project source control, while the latter, because of its wider extent, was part of the parent project. Dissecting the old jsdt_2010.psf, we see the original project structure.

Current Project Structure

The current project structure is based on the old structure, but it has additional projects.
To simplify, I split the project into four sets:
- JSDT Core, Debug, Docs (& Tests): under the webtools.jsdt source repository; contains similar data to the old project.
- JSDT.js: also under the webtools.jsdt source repo, but contains the Nodejs stuff.
- wst.json: under webtools.sourceediting; contains the projects needed to parse / edit JSON.
- wst.jsdt.web: also under the webtools.sourceediting repo; contains the projects to include JSDT in Web editors.

The image below represents simultaneously all the above project sets, as visible in my workspace.

A complete Project Set

Here you can find the complete projectset, containing the four projectsets above, plus the Platform dependencies and the webtools project: wst.jsdt.allProjects_20160209.psf. After importing, you should see the project sets below. The full list of projects in my workspace is visible in the image below.

JSDT Development

At this point, to start with JSDT development, you will need to:
1. clone the needed repositories to your local machine
2. set up the development environment, as explained in my previous article
3. import the referenced projectset
4. launch the inner Eclipse with the source plugins you want

Note: Your comments and suggestions are very welcome. Thanks for your feedback!

References:
- eclipse.org/webtools/jsdt/: JavaScript Development Tools (JSDT) page at Eclipse
- wiki.eclipse.org/JSDT: wiki and instructions for development
- projects.eclipse.org/projects/webtools.jsdt: JSDT project and developer resources
- projects.eclipse.org/projects/webtools/developer: Webtools developer resources
Posted about 12 hours ago
In the last weeks I needed to look at several issues regarding OSGi dependencies in different products. A lot of these issues were IMHO related to wrong usage of OSGi bundle fragments. As I needed to search for various solutions, I will publish my results and my opinion on the usage of fragments in this post. Partly also to remind myself about it in the future.

What is a fragment?

As explained in the OSGi Wiki, a fragment is a bundle that makes its contents available to another bundle. And most importantly, a fragment and its host bundle share the same classloader. Looking at this from a more abstract point of view, a fragment is an extension to an existing bundle. This might be a simplified statement, but considering this statement helped me solve several issues.

What are fragments used for?

I have seen a lot of different usage scenarios for fragments. Considering the above statement, some of them were wrong by design. But before explaining when not to use fragments, let's look at when they are the agent of choice. Basically fragments need to be used whenever a resource needs to be accessible by the classloader of the host bundle. There are several use cases for that, most of them relying on technologies and patterns that are based on standard Java. For example:
- Add configuration files to a third-party plug-in, e.g. provide the logging configuration (log4j.xml for the org.apache.log4j bundle)
- Add new language files for a resource bundle, e.g. a properties file for locale fr_FR that needs to be located next to the other properties files by specification
- Add classes that need to be dynamically loaded by a framework, e.g. provide a custom logging appender
- Provide native code (this can be done in several ways, but more on that shortly)

In short: fragments are used to customize a bundle.

When are fragments the wrong agent of choice?

To explain this we will look at the different ways to provide native code as an example.
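The logging configuration use case mentioned above can be illustrated with a minimal fragment manifest; the Fragment-Host header is what attaches the fragment's resources to the host's classloader. This is a sketch: the fragment's symbolic name and version are made up for the example, only the host name follows the log4j case from the text.

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.log4j.config
Bundle-Version: 1.0.0
Fragment-Host: org.apache.log4j
```

With a log4j.xml placed in the root of such a fragment, the host's classloader finds it as if it were part of org.apache.log4j itself.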
One way is to use the Bundle-NativeCode manifest header. This way the native code for all environments is packaged in the same bundle. So no fragments here, but sometimes not easy to set up; at least I struggled with this approach some years ago.

A more common approach is to use fragments. For every supported platform there is a corresponding fragment that contains the platform-specific native library. The host bundle on the other side typically contains the Java code that loads the native library and provides the interface to access it (e.g. via JNI). This scenario is IMHO a good example of using fragments to provide native code. The fragment only extends the host bundle without exposing anything public.

Another approach is the SWT approach. The difference to the above scenario is that the host bundle org.eclipse.swt is an almost empty bundle that only contains the OSGi meta-information in the MANIFEST.MF. The native libraries as well as the corresponding Java code are supplied via platform-dependent fragments. Although SWT is often referred to as the reference for dealing with native libraries in OSGi, I think that approach is wrong.

To elaborate why I think the approach org.eclipse.swt is using is wrong, we will have a look at a small example.
- Create a host bundle in Eclipse via File -> New -> Plug-in Project and name it org.fipro.host. Ensure not to create an Activator or anything else.
- Create a fragment for that host bundle via File -> New -> Other -> Plug-in Development -> Fragment Project and name it org.fipro.host.fragment. Specify the host bundle org.fipro.host on the second wizard page.
- Create the package org.fipro.host in the fragment project.
- Create the following simple class (yes, it has nothing to do with native code in fragments, but it still shows the issues).

  package org.fipro.host;

  public class MyHelper {
    public static void doSomething() {
      System.out.println("do something");
    }
  }

So far, so good. Now let's consume the helper class.
- Create a new bundle via File -> New -> Plug-in Project and name it org.fipro.consumer. This time let the wizard create an Activator.
- In Activator#start(BundleContext) try to call MyHelper#doSomething().

Now the fun begins. Of course MyHelper cannot be resolved at this time. We first need to make the package consumable in OSGi. This can be done in the fragment or the host bundle. I personally tend to configure Export-Package in the bundle/fragment where the package is located. We therefore add the Export-Package manifest header to the fragment. To do this open the file org.fipro.host.fragment/META-INF/MANIFEST.MF, switch to the Runtime tab and click Add… to add the package org.fipro.host.

Note: As a fragment is an extension to a bundle, you can also specify the Export-Package header for org.fipro.host in the host bundle org.fipro.host. org.eclipse.swt is configured this way. But notice that the fragment packages are not automatically resolved using the PDE Manifest Editor, and you need to add the manifest header manually.

After that, the package org.fipro.host can be consumed by other bundles. Open the file org.fipro.consumer/META-INF/MANIFEST.MF and switch to the Dependencies tab. At this time it doesn't matter if you use Required Plug-ins or Imported Packages, although Import-Package should always be the preferred way, as we will see shortly.

Although the manifest headers are configured correctly, the MyHelper class cannot be resolved. The reason for this is PDE tooling. It needs additional information to construct proper class paths for building. This can be done by adding the following line to the manifest file of org.fipro.host:

  Eclipse-ExtensibleAPI: true

After this additional header is added, the compilation errors are gone.

Note: This additional manifest header is not necessary and not used at runtime. At runtime a fragment is always allowed to add additional packages, classes and resources to the API of the host.
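To summarize the headers from the walkthrough, the two manifests end up looking roughly like this (a sketch of the relevant excerpts only, not a verbatim copy of the generated files; the file-path labels are not part of the manifests themselves):

```
org.fipro.host/META-INF/MANIFEST.MF (excerpt):
  Bundle-SymbolicName: org.fipro.host
  Eclipse-ExtensibleAPI: true

org.fipro.host.fragment/META-INF/MANIFEST.MF (excerpt):
  Bundle-SymbolicName: org.fipro.host.fragment
  Fragment-Host: org.fipro.host
  Export-Package: org.fipro.host
```

Note that only Fragment-Host and Export-Package matter at runtime; Eclipse-ExtensibleAPI exists purely for the PDE compiler.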
After the compilation errors are gone in our workspace and the application runs fine, let's try to build it using Maven Tycho. I don't want to walk through the whole process of setting up a Tycho build, so let's simply assume you have a running Tycho build and include the three projects in that build. Using POM-less Tycho this simply means adding the three projects to the modules section of the build. You can find further information on Tycho here:
- Eclipse Tycho for building Eclipse Plug-ins and RCP applications
- POM-less Tycho builds for structured environments

Running the build will fail because of a compilation failure. The Activator class does not compile because the import org.fipro.host cannot be resolved. Similar to PDE, Tycho is not aware of the build dependency to the fragment. This can be solved by adding an extra. entry to the build.properties of the org.fipro.consumer project:

  extra.. = platform:/fragment/org.fipro.host.fragment

See the Plug-in Development Environment Guide for further information about build configuration. After that entry was added to the build.properties of the consumer bundle, the Tycho build succeeds as well.

What is wrong with the above?

At first sight it is quite obvious what is wrong with the above solution. You need to configure the tooling in several places to make the compilation and the build work. These workarounds even introduce dependencies where there shouldn't be any. In the above example this might not be a big issue, but think about platform-dependent fragments. Do you really want to configure a build dependency to a win32.win32.x86 fragment on the consumer side?

The above scenario even introduces issues for installations with p2. Using the empty host with implementations in the fragments forces you to ensure that at least (or exactly) one fragment is installed together with the host. Which is another workaround in my opinion (see Bug 361901 for further information).
OSGi purists will say that the main issue is located in PDE tooling and Tycho, because the build dependencies are kept as close as possible to the runtime dependencies (see for example here). And using tools like Bndtools you don't need these workarounds. In principle I agree with that. But unfortunately it is not possible (or only hard to achieve) to use Bndtools for Eclipse application development, mainly because in plain OSGi, Eclipse features, applications and products are not known. Therefore the feature-based update mechanism of p2 is not usable either. But I don't want to start the PDE vs. Bndtools discussion; that is worth another (series of) posts.

In my opinion the real issue in the above scenario, and therefore also in org.eclipse.swt, is the wrong usage of fragments. Why is there a host bundle that only contains the OSGi meta-information? After thinking a while about this, I realized that the only reason can be laziness! Users want to use Require-Bundle instead of configuring the several needed Import-Package entries. IMHO this is the only reason that the org.eclipse.swt bundle with the multiple platform-dependent fragments exists.

Let's try to think about possible changes. Make every platform-dependent fragment a bundle and configure the Export-Package manifest header for every bundle. That's it on the provider side. If you wonder about the Eclipse-PlatformFilter manifest header, that works for bundles as well as for fragments, so we don't lose anything here. On the consumer side we need to ensure that Import-Package is used instead of Require-Bundle. This way we declare dependencies on the functionality, not on the bundle where the functionality originated. That's all!

Using this approach, the workarounds mentioned above can be removed. PDE and Tycho are working as intended, as they can simply resolve bundle dependencies. I have to admit that I'm not sure about p2 regarding the platform-dependent bundles; I would need to check this separately.
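The proposed restructuring could look roughly like this on both sides (a sketch: the bundle name, version numbers and the platform filter shown are illustrative, only the package name follows the example above):

```
Provider side - one bundle per platform instead of a fragment:
  Bundle-SymbolicName: org.fipro.host.win32.x86
  Export-Package: org.fipro.host;version="1.0.0"
  Eclipse-PlatformFilter: (&(osgi.os=win32)(osgi.arch=x86))

Consumer side - depend on the package, not on the providing bundle:
  Import-Package: org.fipro.host;version="[1.0.0,2.0.0)"
```

Because the consumer only imports the package, any platform bundle that exports it can satisfy the wire, and neither PDE nor Tycho needs to know which concrete platform bundle will be resolved.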
Conclusion

Having a look at the two initial statements about fragments:
- a fragment is an extension to an existing bundle
- fragments are used to customize a bundle

it is IMHO wrong to make API publicly available from a fragment. These statements could even be modified to become the following:
- a fragment is an optional extension to an existing bundle

Having that statement in mind, things get even clearer when thinking about fragments. Here is another example to strengthen my statement. Suppose you have a host bundle that already exports a package org.fipro.host. Now you have a fragment that adds an additional public class to that package, and in a consumer bundle that class is used. Using Bndtools, or the workarounds for PDE and Tycho shown above, this should compile and build fine. But what if the fragment is not deployed or started at runtime? Since there is no constraint for the consumer bundle that would identify the missing fragment, the consumer bundle would start, and you will get a ClassNotFoundException at runtime.

Personally I think that every time a direct dependency to a fragment is introduced, there is something wrong. There might be exceptions to that rule. One could be a custom logging appender that needs to be accessible in other places, e.g. for programmatic configuration. As the logging appender needs to be in the same classloader as the logging framework (e.g. org.apache.log4j), it needs to be provided via a fragment. And to access it programmatically, a direct dependency to the fragment is needed. But honestly, even in such a case a direct dependency to the fragment can be avoided with a good module design. Such a design could for example make the appender an OSGi service. The service interface would be defined in a separate API bundle and the programmatic access would be implemented against the service interface. Therefore no direct dependency to the fragment would be necessary.
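The service-based design sketched in the last paragraph can be illustrated in plain Java. All names here are hypothetical, and a Map stands in for the OSGi service registry; a real implementation would register the appender via Declarative Services and let the consumer look it up through the framework.

```java
import java.util.HashMap;
import java.util.Map;

public class AppenderServiceSketch {

    // Defined in a separate API bundle; consumers compile only against this.
    interface LogAppender {
        void append(String message);
        void setThreshold(String level); // programmatic configuration
    }

    // Lives in the fragment, next to the logging framework's classloader.
    static class ConsoleAppender implements LogAppender {
        private String threshold = "INFO";
        public void append(String message) {
            System.out.println(threshold + ": " + message);
        }
        public void setThreshold(String level) {
            this.threshold = level;
        }
    }

    // Stand-in for the OSGi service registry.
    static final Map<Class<?>, Object> registry = new HashMap<>();

    public static void main(String[] args) {
        // The fragment registers its implementation...
        registry.put(LogAppender.class, new ConsoleAppender());
        // ...and the consumer looks up the interface, never the fragment class.
        LogAppender appender = (LogAppender) registry.get(LogAppender.class);
        appender.setThreshold("DEBUG");
        appender.append("configured via the service interface");
    }
}
```

The consumer bundle only needs an Import-Package on the API package; whether the implementation happens to live in a fragment becomes an invisible deployment detail.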
As I struggled several days with searching for solutions to fragment dependency issues, I hope this post can help others solve such issues. Basically my solution is to get rid of all fragments that export API and either make them separate bundles or let them provide their API via services. If someone with deeper knowledge of OSGi ever comes by this post and has some comments or remarks about my statements, please let me know. I'm always happy to learn something new or gain new insights.
Posted about 22 hours ago
Just over two weeks ago the Eclipse Che project released a beta version of their Che 4.0 release. We published an article introducing Eclipse Che in our Eclipse Newsletter so readers can learn more about the highlights of Che. The feedback in the community has been pretty exciting to watch. On Twitter, people are certainly creating a buzz about the future of the IDE.

"just tried #eclipse #che and the integration of #docker inside is f***ing awesome ! still a beta but #che is the future of developper IDE" — m42™ (@michoo_42) February 6, 2016

"#EclipseChe looks just amazing…. https://t.co/dvVa4gUX0G #Eclipse #Java #IDE" — Alejandro Matos (@amatosg) January 29, 2016

"Holy shit, Eclipse's next-gen IDE looks awesome https://t.co/6eXmNRB6RW" — Matthew Hall (@qualidafial) February 2, 2016

InfoWorld is calling Eclipse Che the launch of the cloud IDE revolution. The Eclipse Che GitHub repo has 1500 stars and 200 forks. There have been over 100,000 downloads of the Che beta, so people are trying it out. The buzz is certainly growing around Eclipse Che.

At EclipseCon in March you will be able to experience Eclipse Che first hand, including Tyler Jewell's keynote address on the Evolution and Future of the IDE. If you are interested in the future of cloud IDEs then plan to attend EclipseCon.
Posted about 23 hours ago
The IoT industry is slowly but steadily moving from a world of siloed, proprietary solutions to embracing more and more open standards and open source technologies. What's more, the open source projects for IoT are becoming more and more integrated, and you can now find one-stop-shop open source solutions for things like programming your IoT microcontroller, or deploying a scalable IoT broker in a cloud environment. Here are the Top 5 open source IoT projects that you should really be watching this year.

#1 – The Things Network

LP-WAN technologies are going to be a hot topic for 2016. It's unclear who will win, but the availability of an open-source ecosystem around them is going to be key. The Things Network is a crowdsourced worldwide community for bringing LoRaWAN to the masses. Most of their backend is open source and on GitHub.

#2 – VerneMQ

MQTT just got approved as an ISO standard. What else do you need to demonstrate that it's one of the key protocols for IoT? More open-source implementations! VerneMQ is a highly scalable MQTT broker written in Erlang that is getting lots of interest, if you judge by its 500 stars on GitHub!

#3 – RIOT OS

RIOT is a very impressive realtime operating system for IoT, with a very active community. For the first time this year, they are organizing a RIOT Summit – that certainly says something about the maturity of the project!

#4 – Eclipse IoT

I could not not include Eclipse IoT in the list! ;-) The thing is, there really is a lot of cool stuff happening right now, and I think 2016 will be exciting to watch for Eclipse IoT. In particular, we're moving to the cloud, and projects like Eclipse Hono will provide a great foundation for building OSS-based IoT backends.

#5 – RHIOT

It's not a typo, both RIOT and... RHIOT in the same Top 5!
Red Hat is already contributing to several open-source projects very relevant in an IoT context (e.g. Apache Camel), and RHIOT is an interesting approach for implementing end-to-end IoT messaging.

What about you? What are the projects you think are going to make a difference in the months to come? In case you missed it, the upcoming IoT Summit, co-located with EclipseCon North America, is a great opportunity for you to learn about some of the projects mentioned above, so make sure to check it out!
Posted 1 day ago
Over the weekend I worked on my API to interface with the TypeScript language service from my Java code. While the initial version I developed some months ago used the "tsserver" to communicate with the LanguageService, I decided to rewrite that and to interface with the service directly (in memory or through an extra process). For the in-memory version I implemented two possible ways to load and call the JavaScript sources:
- Nashorn
- V8 (with the help of j2v8)

I already expected Nashorn to be slower than V8, but after having implemented a small (non-scientific) performance sample, the numbers show that Nashorn is between 2 and 4 times slower than V8 (there's only one call faster in Nashorn). The sample code looks like this:

  public static void main(String[] args) {
    try {
      System.err.println("V8");
      System.err.println("============");
      executeTests(timeit("Boostrap", () -> new V8Dispatcher()));
      System.err.println();
      System.err.println("Nashorn");
      System.err.println("============");
      executeTests(timeit("Nashorn", () -> new NashornDispatcher()));
    } catch (Throwable e) {
      e.printStackTrace();
    }
  }

  private static void executeTests(Dispatcher dispatcher) throws Exception {
    timeit("Project", () -> dispatcher.sendSingleValueRequest(
      "LanguageService", "createProject", String.class, "MyProject").get());
    timeit("File", () -> dispatcher.sendSingleValueRequest(
      "LanguageService", "addFile", String.class, "p_0",
      DispatcherPerformance.class.getResource("sample.ts")).get());
    timeit("File", () -> dispatcher.sendSingleValueRequest(
      "LanguageService", "addFile", String.class, "p_0",
      DispatcherPerformance.class.getResource("sample2.ts")).get());
    timeit("Outline", () -> dispatcher.sendMultiValueRequest(
      "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class,
      "p_0", "f_0").get());
    timeit("Outline", () -> dispatcher.sendMultiValueRequest(
      "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class,
      "p_0", "f_1").get());
  }

It provides the following
numbers:

  V8
  ============
  Boostrap : 386
  Project  : 72
  File     : 1
  File     : 0
  Outline  : 40
  Outline  : 10

  Nashorn
  ============
  Nashorn  : 4061
  Project  : 45
  File     : 29
  File     : 2
  Outline  : 824
  Outline  : 39

The important numbers to compare are:
- Bootstrap: ~400ms vs ~4000ms
- 2nd Outline: ~10ms vs ~40ms

So performance indicates that the service should go with j2v8, but requiring that as a hard dependency has the following disadvantages:
- you need to ship different native binaries for each OS you want to run on
- you need to ship v8, which might or might not be a problem

So the strategy internally is that if j2v8 is available we'll use V8, and if not we fall back to the slower Nashorn – a strategy I would probably recommend for your own projects as well. If there are any Nashorn experts around, feel free to help me fix my implementation.
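The fallback strategy described above can be sketched with a simple classpath check. The j2v8 entry-point class really is com.eclipsesource.v8.V8; the surrounding class name and dispatcher wiring are illustrative.

```java
// Sketch of the described fallback: prefer j2v8's V8 if it is on the
// classpath, otherwise fall back to the slower but portable Nashorn.
public class DispatcherFactory {

    public static String selectEngine() {
        try {
            // j2v8 present -> use the faster V8-based dispatcher
            Class.forName("com.eclipsesource.v8.V8");
            return "v8";
        } catch (ClassNotFoundException e) {
            // no native binding available -> fall back to Nashorn
            return "nashorn";
        }
    }

    public static void main(String[] args) {
        // In the real code this would construct a V8Dispatcher or a
        // NashornDispatcher; here we only report the decision.
        System.out.println("selected engine: " + selectEngine());
    }
}
```

Because the check happens at runtime, the native j2v8 binaries stay an optional dependency that consumers can add per platform.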
Posted 1 day ago by nore...@blogger.com (David Bosschaert)
Inspired by my friend Philipp Suter who pointed me at this Wired article http://www.wired.com/2016/02/rebuilding-modern-software-is-like-rebuilding-the-bay-bridge which relates to Martin Fowler's Branch by Abstraction, I was thinking: how would this work in an OSGi context?

Leaving aside the remote nature of the problem for the moment, let's focus on the pure API aspect here. Whether remote or not is really orthogonal... I'll work through this with example code that can be found here: https://github.com/coderthoughts/primes

Let's say you have an implementation to compute prime numbers:

  public class PrimeNumbers {
    public int nextPrime(int n) {
      // computes next prime after n - see https://github.com/coderthoughts/primes for details
      return p;
    }
  }

And a client program that regularly uses the prime number generator. I have chosen a client that runs in a loop to reflect a long-running program, similar to a long-running process communicating with a microservice:

  public class PrimeClient {
    private PrimeNumbers primeGenerator = new PrimeNumbers();

    private void start() {
      new Thread(() -> {
        while (true) {
          System.out.print("First 10 primes: ");
          for (int i=0, p=1; i<10; i++) {
            if (i > 0) System.out.print(", ");
            p = primeGenerator.nextPrime(p);
            System.out.print(p);
          }
          System.out.println();
          try { Thread.sleep(1000); } catch (InterruptedException ie) {}
        }
      }).start();
    }

    public static void main(String[] args) {
      new PrimeClient().start();
    }
  }

If you have the source code cloned or forked using git, you can run this example easily by checking out the stage1 branch and using Maven:

  .../primes> git checkout stage1
  .../primes> mvn clean install
  ... maven output ...
  [INFO] ------------------------------------------------
  [INFO] BUILD SUCCESS
  [INFO] ------------------------------------------------

Then run it from the client submodule:

  .../primes/client> mvn exec:java -Dexec.mainClass=\
      org.coderthoughts.primes.client.PrimeClient
  ... maven output ...
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  ... and so on ...

OK, so our system works. It keeps printing out prime numbers, but as you can see there is a bug in the output. We also want to replace it in the future with another implementation. This is what the Branch by Abstraction pattern is about.

In this post I will look at how to do this with OSGi Services. OSGi Services are just POJOs registered in the OSGi Service Registry. OSGi Services are dynamic, they can come and go, and OSGi Service consumers dynamically react to these changes, as we'll see. In the following few steps we'll change the implementation to an OSGi Service. Then we'll update the service at runtime to fix the bug above, without even stopping the service consumer. Finally we can replace the service implementation with a completely different implementation, also without even stopping the client.

Turn the application into OSGi bundles

We'll start by turning the program into an OSGi program that contains 2 bundles: the client bundle and the impl bundle. We'll use the Apache Felix OSGi Framework and also use OSGi Declarative Services which provides a nice dependency injection model to work with OSGi Services.

You can see all this on the git branch called stage2:

  .../primes> git checkout stage2
  .../primes> mvn clean install

The client code is quite similar to the original client, except that it now contains some annotations to instruct DS to start and stop it. Also the PrimeNumbers class is now injected instead of directly constructed, via the @Reference annotation.
The greedy policyOption instructs the injector to re-inject if a better match becomes available:

  @Component
  public class PrimeClient {
    @Reference(policyOption=ReferencePolicyOption.GREEDY)
    private PrimeNumbers primeGenerator;
    private volatile boolean keepRunning = false;

    @Activate
    private void start() {
      keepRunning = true;
      new Thread(() -> {
        while (keepRunning) {
          System.out.print("First 10 primes: ");
          for (int i=0, p=1; i<10; i++) {
            if (i > 0) System.out.print(", ");
            p = primeGenerator.nextPrime(p);
            System.out.print(p);
          }
          System.out.println();
          try { Thread.sleep(1000); } catch (InterruptedException ie) {}
        }
      }).start();
    }

    @Deactivate
    private void stop() {
      keepRunning = false;
    }
  }

The prime generator implementation code is the same except for an added annotation. We register the implementation class in the Service Registry so that it can be injected into the client:

  @Component(service=PrimeNumbers.class)
  public class PrimeNumbers {
    public int nextPrime(int n) {
      // computes next prime after n
      return p;
    }
  }

As it's now an OSGi application, we run it in an OSGi framework. I'm using the Apache Felix Framework version 5.4.0, but any other OSGi R6 compliant framework will do.

  > java -jar bin/felix.jar
  g! start http://www.eu.apache.org/dist/felix/org.apache.felix.scr-2.0.2.jar
  g! start file:/.../clones/primes/impl/target/impl-0.1.0-SNAPSHOT.jar
  g! install file:/.../clones/primes/client/target/client-0.1.0-SNAPSHOT.jar

Now you should have everything installed that you need:
  g! lb
  START LEVEL 1
     ID|State      |Level|Name
      0|Active     |    0|System Bundle (5.4.0)|5.4.0
      1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
      2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
      3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
      4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
      5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
      6|Active     |    1|impl (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
      7|Installed  |    1|client (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT

We can start the client bundle:

  g! start 7
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  ... and so on ...

You can now also stop the client:

  g! stop 7

Great - our OSGi bundles work :)

Now we'll do what Martin Fowler calls creating the abstraction layer.

Introduce the Abstraction Layer: the OSGi Service

Go to the branch stage3 for the code:

  .../primes> git checkout stage3
  .../primes> mvn clean install

The abstraction layer for the Branch by Abstraction pattern is provided by an interface that we'll use as a service interface. This interface is in a new Maven module that creates the service OSGi bundle.

  public interface PrimeNumberService {
    int nextPrime(int n);
  }

We'll turn our prime number generator into an OSGi Service. The only difference here is that our PrimeNumbers implementation now implements the PrimeNumberService interface. Also the @Component annotation does not need to declare the service in this case: as the component implements an interface, it will automatically be registered as a service under that interface:

  @Component
  public class PrimeNumbers implements PrimeNumberService {
    public int nextPrime(int n) {
      // computes next prime after n
      return p;
    }
  }

Run everything in the OSGi framework. The result is still the same but now the client is using the OSGi Service:
  g! lb
  START LEVEL 1
     ID|State      |Level|Name
      0|Active     |    0|System Bundle (5.4.0)|5.4.0
      1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
      2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
      3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
      4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
      5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
      6|Active     |    1|service (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
      7|Active     |    1|impl (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
      8|Resolved   |    1|client (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
  g! start 8
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31

You can introspect your bundles too and see that the client is indeed wired to the service provided by the service implementation:

  g! inspect cap * 7
  org.coderthoughts.primes.impl [7] provides:
  -------------------------------------------
  ...
  service; org.coderthoughts.primes.service.PrimeNumberService with properties:
     component.id = 0
     component.name = org.coderthoughts.primes.impl.PrimeNumbers
     service.bundleid = 7
     service.id = 22
     service.scope = bundle
     Used by:
        org.coderthoughts.primes.client [8]

Great - now we can finally fix that annoying bug in the service implementation: that it missed 2 as a prime! While we're doing this we'll just keep the bundles in the framework running...

Fix the bug in the implementation without stopping the client

The prime number generator is fixed in the code in stage4:

  .../primes> git checkout stage4
  .../primes> mvn clean install

It's a small change to the impl bundle. The service interface and the client remain unchanged. Let's update our running application with the fixed bundle:

  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
  First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
update 7 file:/.../clones/primes/impl/target/impl-1.0.1-SNAPSHOT.jarFirst 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29Great - finally our service is fixed! And notice that the client did not need to be restarted! The  DS injection, via the @Reference annotation, handles all of the dynamics for us! The client code simply uses the service as a POJO.The branch: change to an entirely different service implementation without client restartBeing able to fix a service without even restarting its users is already immensely useful, but we can go even further. I can write an entirely new and different service implementation and migrate the client to use that without restarting the client, using the same mechanism. This code is on the branch stage5 and contains a new bundle impl2 that provides an implementation of the PrimeNumberService that always returns 1. .../primes> git checkout stage5.../primes> mvn clean installWhile the impl2 implementation obviously does not produce correct prime numbers, it does show how you can completely change the implementation. In the real world a totally different implementation could be working with a different back-end, a new algorithm, a service migrated from a different department etc...Or alternatively you could do a façade service implementation that round-robins across a number of back-end services or selects a backing service based on the features that the client should be getting.In the end the solution will always end up being an alternative Service in the service registry that the client can dynamically switch to.So let's start that new service implementation:First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29g! 
start file:/.../clones/primes/impl2/target/impl2-1.0.0-SNAPSHOT.jarFirst 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29g! stop 7First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1Above you can see that when you install and start the new bundle initially nothing will happen. At this point both services are installed at the same time. The client is still bound to the original service as its still there and there is no reason to rebind, the new service is no better match than the original. But when the bundle that provides the initial service is stopped (bundle 7) the client switches over to the implementation that always returns 1. This switchover could happen at any point, even halfway thought the production of the list, so you might even be lucky enough to see something like:First 10 primes: 2, 3, 5, 7, 11, 13, 1, 1, 1, 1I hope I have shown that OSGi services provide an excellent mechanism to implement the Branch by Abstraction pattern and even provide the possibility to do the switching between suppliers without stopping the client!In the next post I'll show how we can add aspects to our services, still without modifying or even restarting the client. These can be useful for debugging, tracking or measuring how a service is used.PS - Oh, and on the remote thing, this will work just as well locally or Remote. Use OSGi Remote Services to turn your local service into a remote one... For available Remote Services implementations see https://en.wikipedia.org/wiki/OSGi_Specification_Implementations#100:_Remote_ServicesWith thanks to Carsten Ziegeler for reviewing and providing additional ideas. [Less]
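The post shows only a placeholder body for nextPrime (return p;), so here is a self-contained sketch of what a corrected, stage4-style generator could look like. The class and method names follow the post; the trial-division body is my own illustration, not the actual stage4 code.

```java
public class PrimeNumbers {
    // Returns the smallest prime strictly greater than n.
    public int nextPrime(int n) {
        int candidate = Math.max(n + 1, 2); // the stage4-style fix: never skip 2
        while (!isPrime(candidate)) {
            candidate++;
        }
        return candidate;
    }

    // Simple trial division, sufficient for a demo service.
    private boolean isPrime(int x) {
        if (x < 2) return false;
        for (int i = 2; (long) i * i <= x; i++) {
            if (x % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        PrimeNumbers pn = new PrimeNumbers();
        StringBuilder sb = new StringBuilder("First 10 primes:");
        int p = 1;
        for (int i = 0; i < 10; i++) {
            p = pn.nextPrime(p);
            sb.append(' ').append(p);
        }
        System.out.println(sb); // First 10 primes: 2 3 5 7 11 13 17 19 23 29
    }
}
```

Starting from 1 rather than 2 is exactly the kind of off-by-one that produced the "missed 2" bug: without the Math.max clamp, nextPrime(1) would begin testing at 2 only if the caller remembered to, which is why fixing it inside the service (and hot-updating the bundle) is so convenient.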
Posted 4 days ago
Like many of my Eclipse stories, it starts during a coffee break.

"Have you seen the new TODO template I have configured for our project?"
"Yes. It is nice… But I hate having to set the date manually."
"I know, but it is not possible with Eclipse." …

A quick search on Google pointed me to Bug 75981. I was not the only one looking for a solution to this issue:

How to set the Eclipse date variable format?
Templates in Eclipse
Tweet by @Grummfy
Tweet by @Sigasi

By analyzing the Bugzilla history I noticed that two contributors had already started to work on this (a long time ago), and the feedback on the latest patch never got any answers. I reworked the last proposal… and… I am happy to tell you that you can now do the following:

Short description of the possibilities:

As before, you can use the date variable with no argument. Example: ${date}
You can use the variable with additional arguments. In this case you will need to name the variable (since you are not reusing the date somewhere else, the name of the variable doesn't matter). Example: ${mydate:date}
The first parameter is the date format. Example: ${d:date('yyyy-MM-dd')}
The second parameter is the locale. Example: ${maDate:date('EEEE dd MMMM yyyy HH:mm:ss Z', 'fr')}

Back to our use case, it now works as expected. Do not hesitate to try the feature and to report any issue you find. The fix is part of the M5 milestone release of Eclipse Neon.
You can download this version now here: http://www.eclipse.org/downloads/index-developer.php

This experiment was also a great opportunity for me to measure how the development process at Eclipse has improved:

With Eclipse Oomph (a.k.a. the Eclipse Installer) it is possible to set up the workspace to work on "Platform Text" very quickly.
With Gerrit it is much easier for me (a simple contributor) to work with the committers of the project (propose a patch, discuss each line, push a new version, rebase on top of HEAD…).
With the Maven build, the build is reproducible (I never tried to build the platform with the old PDE Build, but I believe that was not possible for somebody like me).

Where I spent most of the time:

Analysis of the proposed patches and existing feedback in Bugzilla.
Figuring out how I could add some unit tests (for the existing behaviour and for the new use cases).

This was a great experience for me and I am really happy to have contributed this fix. [Less]
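The format and locale arguments shown above follow Java's standard date-formatting conventions, so their effect can be previewed in plain Java. This is a rough sketch of what the two template examples resolve to, not the template engine's actual code; the resolve helper is a hypothetical name of my own.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class DateVariableDemo {
    // Roughly what a ${name:date('pattern', 'locale')} variable expands to:
    // the current date, formatted with the given pattern and locale.
    static String resolve(String pattern, Locale locale) {
        return new SimpleDateFormat(pattern, locale).format(new Date());
    }

    public static void main(String[] args) {
        // ${d:date('yyyy-MM-dd')} -> e.g. 2016-02-05
        System.out.println(resolve("yyyy-MM-dd", Locale.getDefault()));
        // ${maDate:date('EEEE dd MMMM yyyy HH:mm:ss Z', 'fr')}
        // -> e.g. vendredi 05 fevrier 2016 10:15:00 +0100 (French day/month names)
        System.out.println(resolve("EEEE dd MMMM yyyy HH:mm:ss Z",
                Locale.forLanguageTag("fr")));
    }
}
```

Pattern letters (yyyy, MM, EEEE, Z, ...) are the usual SimpleDateFormat ones, which is why the second example produces French weekday and month names when given the 'fr' locale.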
Posted 4 days ago
Our first official Jubula standalone release of the year is 8.2.2 - and it's got a lot of exciting new features!

From "beta" to "official"

Just before Christmas, we released a Jubula beta version that had some pretty awesome stuff in it (I'll get to what it is in a moment). I was so excited about the new features that we decided to add a couple more that were in progress, then release it as an official version. That version is 8.2.2, and it can now be downloaded from the testing portal.

The highlights

The short version is that everything you've seen in beta releases since the end of October 2015 is now in the release. The longer version is much more exciting.

Copy and paste

I actually never thought I'd write these lines, but we have indeed added copy and paste support to the Jubula ITE. You can now copy Test Cases, Test Suites, Test Steps, and Event Handlers between editors. Why now? Well, I have been listening to the people who requested this over the years, and we have a new team member who needed a nice starter topic to work on. I still personally think it's evil - you all know by now that we'd much prefer you to structure tests to be reusable and readable. Nevertheless, we hope you enjoy the new feature.

Time reduction when saving

We've moved our completeness checks to a background job, so saving no longer blocks your continuing work as it had done previously.

Set Object Mapping Profile for individual technical names

Our standard object mapping profile is pretty amazing - it's heuristic, so even unnamed components can be located in an application. Sometimes, though, you end up having to remap individual items more frequently, and you ask the developers to name them. Now it's possible to specify for individual technical names that component recognition for that name should be based only on its name. That way, you don't have to name everything, but can use the "Given Names" profile for the technical names you know are set. This function is also available in the Client API.

New Test Steps for executing Java methods in the application context

Sometimes you just want to directly call a method you know is available in your application, or on a specific component. The new invoke-method actions let you do just that. You can specify the class name and method name, as well as parameters - and you can execute the action either on the application in general or on specific components.

Multi-line comments in editors

There is a new option to add a comment node in the Test Case Editor and Test Suite Editor. The comments are shown directly in the editor, and you can use them to comment on the nodes that follow. This is in contrast to the descriptions, which are only shown for a selected node.

New dbtool options

The dbtool, for executing actions directly on the database, has two new options. You can now delete all test result summaries (including details) for a specific time frame or project, and you can delete just the details of test result summaries for a time frame or project.

Oomph setup

In case you missed it, there is also an Oomph setup for Jubula.

As you can see, it's been a busy few months. Development continues, and our next beta release will contain updates to the JaCoCo support and HTML support, amongst other things. Happy testing! [Less]
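Conceptually, an invoke-method action of the kind described above boils down to a reflective call: look up a method by name and parameter types on a target object, then invoke it with the supplied arguments. The sketch below illustrates that mechanism only; it is not Jubula's actual implementation, and the invoke helper is a hypothetical name.

```java
import java.lang.reflect.Method;

public class InvokeSketch {
    // Invoke a named public method on a target object, with the method name
    // and parameters supplied as data - the core idea behind such test steps.
    static Object invoke(Object target, String methodName, Object... args) {
        try {
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) {
                types[i] = args[i].getClass();
            }
            Method m = target.getClass().getMethod(methodName, types);
            return m.invoke(target, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("invocation failed: " + methodName, e);
        }
    }

    public static void main(String[] args) {
        // e.g. call "hello ".concat("world") purely by name
        System.out.println(invoke("hello ", "concat", "world")); // hello world
    }
}
```

A real tool would additionally handle primitive parameter types, overload resolution and dispatching the call onto the UI thread of the application under test; the exact-type getMethod lookup here is the simplest possible variant.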